In the midst of the 2024 U.S. presidential election, voters were exposed to a flood of promotional political content: emails asking for donations, texts recruiting campaign volunteers, and billboards and advertisements promoting candidates. Yet they also received something unusual, something previous election cycles had not had to contend with. In New Hampshire, voters picked up the phone to hear what sounded like candidate Joe Biden’s voice; in reality, it was an AI-generated imitation, opposition messaging designed to convince constituents not to cast a vote in the primary election. This is just one example of how artificial intelligence, or AI, has started to influence elections across the United States. What happened in New Hampshire is not an anomaly; it represents a more serious concern for American democracy, as analysis from TIME reveals that AI models have evolved rapidly, produce content of uncertain factual reliability, and are not held to the same standards as campaigns and mainstream media. The implications for elections are profound: a single fabricated video released days before voting could mislead millions before fact-checkers even begin to respond.
While the legislative response to this issue has so far been limited, lawmakers have begun to formulate policies addressing it. In California, state legislators responded to incidents like AI robocalls telling voters that their polling place had changed by passing AB 2839 in 2024. The bill banned AI-generated or digitally altered political communications containing misleading information during defined periods before and after an election, and required content used as satire or parody, categories the law permitted, to be labeled as manipulated by artificial intelligence. Although the bill was signed into law, it soon faced a legal challenge. Conservative social commentator Christopher Kohls sued the state, arguing that the law violated the First Amendment. The federal court ruled in Kohls’s favor and blocked AB 2839 from taking effect, largely on the grounds that the law was overbroad and discriminated based on content. This raises an important legal question: do laws that restrict the posting of AI-generated deepfakes, especially those relating to elections, violate the First Amendment?
While this is an evolving issue, California’s AB 2839 should be upheld because it survives strict scrutiny: preventing voter deception through AI-generated deepfakes is a compelling state interest, and the law is narrowly tailored to regulate only intentional synthetic impersonation while preserving all meaningful avenues for political expression. As generative AI creates unprecedented opportunities for electoral manipulation, the Constitution does not require states to remain powerless to protect the integrity of democratic decision-making.
While the court struck down this law, it noted that the government has a compelling interest in protecting election integrity and that its interest was not in regulating specific speech (Kohls v. Bonta). The core debate comes down to whether the law is (1) narrowly tailored to fit this interest and (2) not targeted at specific expression. In cases like this one, centered on free speech, laws must pass the test of strict scrutiny: the government must prove that its restrictions are necessary to advance a compelling state interest and are narrowly tailored to achieve that interest. This framework is essential because courts treat regulations affecting political speech with great skepticism, presuming them unconstitutional unless the state can justify both the magnitude of the harm and the precision of the remedy.
First, this law has the necessary scope to carry out the government’s interest because it regulates medium, not content. Although political speech is afforded the highest level of protection, deliberately deceptive speech, particularly fraud, impersonation, and misrepresentation, has long been subject to regulation. In Illinois ex rel. Madigan v. Telemarketing Associates, Inc. (2003), the Supreme Court reaffirmed that intentionally misleading speech may be restricted because it causes concrete societal harm. Deepfakes fall into a similar category: they do not merely convey a viewpoint but fabricate the appearance of a real person saying or doing something they never did. The state’s target is thus not political disagreement but synthetic impersonation, a form of deception historically regulable even when it has expressive effects. The law does not target specific viewpoints but an entire medium of expression, making it content-neutral, as shown by the legislative definition of a deepfake, which centers on “media that is digitally created or modified such that it would falsely appear to a reasonable person to be an authentic record of the actual speech or conduct of the individual depicted…,” not any particular message. Precedent shows that when modes of speech rather than specific content are regulated, such measures can stand. For instance, in the 1992 case Burson v. Freeman, the Supreme Court upheld a Tennessee law creating a 100-foot buffer zone around polling places where campaigning was prohibited. That law likewise restricted political expression, but because it served the state’s interest in election integrity and targeted the manner of speech rather than its message, it was upheld.
Even if AB 2839 were deemed content-based, it survives strict scrutiny because California’s interest in preventing voter deception is one of the strongest the Court has ever recognized. Multiple decisions confirm that safeguarding elections is a compelling state interest warranting strong regulatory measures. In Burson, the Court emphasized that preventing voter intimidation and confusion was a rare case where even a content-based restriction on political speech was justified. Minnesota Voters Alliance v. Mansky (2018) reaffirmed that states have broad authority to maintain order and protect voters from misleading or manipulative influences, especially in relation to elections. Deepfakes pose a threat that encompasses all the harms identified in these cases, namely confusion, manipulation, and distortion of political choice, but at a scale unmatched by any prior medium.
Having established that the state’s interest is compelling, the next question is whether AB 2839 is narrowly tailored, meaning it burdens no more speech than necessary. Here, the statute is precise. It applies only during defined election windows, the period when deceptive content is most likely to cause irreparable harm. It also contains an intent requirement, ensuring that only those who knowingly distribute deceptive deepfakes are subject to regulation. This is a critical safeguard: political satire, inadvertent sharing, artistic uses, and commentary are all outside the statute’s reach. The law further exempts satire and parody entirely, provided they are labeled, a minimal and reasonable requirement that enables viewers to distinguish expressive exaggeration from fabricated reality. These features reflect the careful tailoring courts look for when upholding restrictions under strict scrutiny. Without the labeling requirement, the line between satire and genuine political information would blur, allowing political actors to justify anything as satire and defeating the purpose of any regulation.
Critics contend that regulating misinformation inherently regulates content. But AB 2839’s focus on synthetic impersonation makes it more analogous to regulating conduct. In United States v. O’Brien (1968), the Court upheld a regulation on burning draft cards, finding that conduct containing expressive elements can still be regulated when the government’s interest is unrelated to suppressing expression. Similarly, AB 2839 regulates the use of AI to fabricate another person’s identity, not their political ideas. The law applies irrespective of the viewpoint expressed in the deepfake. A falsified endorsement, fabricated policy stance, or invented scandal would all fall under the statute not because of the content of the speech but because of the deceptive method used to create it. The harm arises not from political persuasion but from the obliteration of the boundary between reality and fabrication.
In conclusion, this will not be the last legal battle over free speech and artificial intelligence, which makes it especially important to examine the nuance of laws like California’s. AI-generated deepfakes present a novel challenge to democratic processes, enabling realistic and scalable political impersonation that traditional First Amendment doctrine never anticipated. California’s AB 2839 represents a narrowly tailored and constitutionally permissible response: it targets only deceptive, intentional synthetic impersonation during limited election periods and leaves untouched the robust sphere of political discourse. Though comprehensive, the law is no broader than it must be to address the problem without creating easy loopholes. Far from violating the First Amendment, AB 2839 exemplifies the type of principled regulation that strict scrutiny is designed to permit: laws that protect compelling state interests while burdening no more speech than necessary. As generative AI rapidly changes the information landscape, constitutional interpretation must continue to recognize the state’s power to safeguard the foundational integrity of elections.


