Truth, Lies, and Code: The Fight to Control AI in Politics
Federal Judge Strikes Down California Laws Targeting AI in Political Ads
A federal judge has struck down two California laws designed to limit the use of artificial intelligence (AI) in political advertising, ruling that the measures violate constitutional free-speech protections and conflict with federal law.
Senior U.S. District Judge John Mendez, appointed by President George W. Bush, issued the ruling on Friday, blocking enforcement of Assembly Bill 2839 and Assembly Bill 2655. The two laws were passed in 2024 in response to growing concerns about the use of deepfakes and AI-generated disinformation in elections.
In his decision, Judge Mendez acknowledged the potential harm AI-generated content can pose to democratic processes, but emphasized that the government cannot suppress speech in an attempt to combat those risks.
“To be sure, deepfakes and artificially manipulated media arguably pose significant risks to electoral integrity,” Mendez wrote in his opinion. “But the challenges launched by digital content on a global scale cannot be quashed through censorship or legislative fiat. Just as the government may not dictate the canon of comedy, California cannot preemptively sterilize political content.”
AI and the First Amendment
Assembly Bill 2839 specifically targeted AI-generated disinformation and deepfake media in political content. The law sought to ban the distribution of such material in political ads, including mailers, TV spots, robocalls, texts, and radio, in the 120 days before an election. For content related to voting systems or election officials, the ban would have extended to 60 days after the election.
The bill also required candidates who used AI to depict themselves saying or doing things they never actually said or did to disclose that the content was manipulated. Additionally, it allowed courts to grant expedited injunctive relief to halt the dissemination of violating content.
The law was challenged by several plaintiffs, including the satirical news outlet The Babylon Bee and various online content creators, who argued that it infringed on their First Amendment rights. The judge agreed, finding that however admirable the law's aims, its restrictions on speech swept too broadly to be constitutionally valid.
Section 230 and Platform Liability
The second law, Assembly Bill 2655, targeted tech platforms, requiring them to remove AI-generated political disinformation upon request. Judge Mendez had previously ruled that this law conflicted with Section 230 of the federal Communications Decency Act, which shields internet platforms from liability for user-generated content.
Tech platforms X (formerly Twitter) and Rumble were among the plaintiffs challenging this measure. They argued that California’s law forced them to police political content in ways that federal law explicitly prevents.
By granting judgment against both AB 2839 and AB 2655, the court effectively ended California's bid to become the first state to impose legal restrictions on AI-generated political media ahead of the 2026 election cycle.
California’s Legislative Intent
Governor Gavin Newsom signed both bills into law in September 2024, citing the urgent need to guard against AI-driven disinformation in a rapidly evolving media landscape.
“It’s critical that we ensure AI is not deployed to undermine the public’s trust through disinformation—especially in today’s fraught political climate,” Newsom said at the time.
In a promotional “fact sheet” for AB 2839, the bill’s sponsors described the emerging risks posed by generative AI. They warned that the technology allows bad actors—foreign governments, conspiracy theorists, and even candidates themselves—to create fake images or videos that appear authentic, potentially misleading voters and disrupting democratic elections.
“With a few clicks, current technology allows someone to produce a fake video of an election official claiming voting machines are rigged, or a deepfake of a candidate accepting a bribe,” the document warned. “This kind of content can deeply influence voter behavior and damage public trust in elections.”
The bill’s supporters argued that the law was narrowly tailored to protect voters and the integrity of the democratic process without violating constitutional rights. However, Judge Mendez found that the law’s restrictions on speech were not sufficiently narrow and risked chilling legitimate political expression.
Looking Ahead
The ruling underscores the legal complexity surrounding attempts to regulate AI in political contexts. While many agree that AI-generated disinformation poses real threats to democracy, courts continue to prioritize First Amendment protections when evaluating such laws.
As the 2026 election approaches, concerns over the role of generative AI in shaping public opinion remain unresolved, leaving it up to voters, platforms, and watchdog organizations to combat digital misinformation without the backing of state regulation—for now.