California is racing to combat deepfakes ahead of the election

Just days after Vice President Kamala Harris announced her candidacy for president, a video created with the help of artificial intelligence went viral.

“Now that Joe Biden has finally shown his senility in the debates, I am the Democratic nominee for president,” a voice sounding like Harris said in a fake audio track used to doctor one of Harris' campaign ads. “I was selected because I am the ultimate diversity hire.”

Billionaire Elon Musk, a supporter of Harris' Republican opponent, former President Trump, shared the video on X, then clarified two days later that it was, in fact, a parody. His first tweet was viewed 136 million times; a follow-up post calling the video a parody was viewed 26 million times.

But the incident is no laughing matter. It has sparked calls, including from Democrats such as California Gov. Gavin Newsom, for greater regulation to combat AI-generated videos with political messages, as well as renewed debate about the appropriate role of government in reining in emerging technologies.

The California Assembly on Friday gave final approval to a bill, Assembly Bill 2839, that would ban the distribution of false election ads and “election communications” within 120 days of an election. The bill targets manipulated content that undermines a candidate's reputation or electoral prospects, or confidence in election results. It is intended to address content like the Harris video shared by Musk, but makes exceptions for parody and satire.

“California will be facing its first election ever in which AI-generated misinformation will pollute our information ecosystem like never before, leaving millions of voters unsure which images, audio and video they can trust,” said Assemblymember Gail Pellerin (D-Santa Cruz). “And we have to do something.”

Newsom has indicated that he intends to sign the bill, which would take effect immediately, in time for the November election.

The bill would amend an existing California law that bans distributing false audio or visual media with the intent to damage a candidate's reputation or deceive voters within 60 days of an election. State lawmakers say the law needs to be strengthened during an election season in which digitally altered videos and photos, known as deepfakes, are already rampant on social media.

The spread of disinformation through deepfakes has been a concern for lawmakers and regulators during past elections, heightened by the emergence of new AI-powered tools, such as chatbots, that can quickly generate images and videos. From fake automated phone calls to fake celebrity endorsements of candidates, AI-generated content has posed a challenge for tech platforms and lawmakers.

Under AB 2839, candidates, election boards and election officials can seek court orders to remove deepfakes and can also sue for damages against those who distribute or republish the false material.

The law also applies to false media posted within 60 days after an election, including content that falsely portrays voting machines, ballots, polling places, or other election-related property in a way that could undermine confidence in the election results.

However, it does not apply to works that are classified as satire or parody, or where the broadcaster informs viewers that the material depicted is not an accurate representation of speech or events.

Tech industry groups have opposed AB 2839 as well as other bills that would target online platforms for not adequately moderating false election content or labeling AI-generated content.

“Constitutionally protected free speech will be stifled and blocked,” said Carl Szabo, vice president and general counsel of NetChoice, whose members include tech giants Google, X and Snap as well as Facebook parent Meta.

Online platforms have their own rules regarding manipulated media and political advertising, but policies may vary.

Unlike Meta and X, TikTok does not allow political ads, and it says it may remove even labeled AI-generated content if it depicts a celebrity or other public figure or “is used for political or commercial endorsement.” Truth Social, the platform founded by Trump, does not mention manipulated media in the rules governing what is not allowed on its platform.

Federal and state regulators have already begun cracking down on AI-generated content.

In May, the Federal Communications Commission proposed a $6 million fine against Steve Kramer, a Democratic political consultant who used AI to make robocalls imitating President Biden's voice. The fake calls urged voters not to participate in New Hampshire's Democratic presidential primary in January. Kramer told NBC News he orchestrated the calls to draw attention to the dangers of AI in politics, but he is facing criminal charges of felony voter suppression and misdemeanor impersonation of a candidate.

Szabo said current laws are sufficient to address concerns about deep fakes in elections. NetChoice is suing states, arguing that some laws aimed at protecting children on social media violate free speech protections under the First Amendment.

“You can't stop bad behavior just by making new laws; you have to actually enforce the laws,” Szabo said.

More than 20 states, including Washington, Arizona and Oregon, have enacted, passed or are working on laws to regulate deepfakes, according to the consumer advocacy group Public Citizen.

California enacted a law aimed at combating manipulated media in 2019 after a video showing House Speaker Nancy Pelosi appearing intoxicated went viral on social media. Enforcement of the law has been a challenge.

“It had to be weakened,” said Assemblymember Marc Berman (D-Menlo Park), the bill's author. “A lot of attention was focused on the potential risks of this technology, but ultimately I was worried that it might not do much good.”

Rather than taking legal action, political candidates may choose to debunk deepfakes or simply ignore them to limit their spread, said Danielle Citron, a professor at the University of Virginia School of Law. By the time a case works its way through the courts, the content may already have spread.

“These laws are important because of the message they send. They teach us something,” she said, adding that they let people who share deepfakes know there will be costs.

This year, lawmakers worked with the California Technology and Democracy Initiative, a project of the nonprofit California Common Cause, on several bills to address political deepfakes.

Some target online platforms, which are exempt under federal law from liability for content posted by users.

Berman also introduced AB 2655, a bill that would require online platforms with at least 1 million users in California to remove or label certain false election-related content within 120 days of an election; platforms would have to act within 72 hours of a user reporting a post. The bill, which passed the Legislature on Wednesday, would also require platforms to have procedures in place to identify, remove and label false content, and it would not apply to parody, satire or news outlets that meet certain requirements.

Another bill, AB 3211, co-authored by Assemblymember Buffy Wicks (D-Oakland), would require online platforms to label AI-generated content. NetChoice and TechNet, another industry group, oppose the bill, but ChatGPT maker OpenAI supports it, Reuters reported.

But neither bill would take effect until after the election, highlighting how difficult it is for new laws to keep pace with rapidly advancing technology.

“Part of my hope in introducing this bill is the attention it will generate and the pressure it will put on social media platforms to act now,” Berman said.