Potential Ban on Parodies and Political Cartoons: The “No AI Fraud Act” Could Have a Chilling Effect

Pairing new technology with new laws is always tricky, especially when the technology in question involves communication. Legislators routinely introduce bills that would sweep in broad swaths of speech protected by the First Amendment. The trend has been especially evident with social media, and it is now emerging in the realm of artificial intelligence. The No Artificial Intelligence Fake Replicas And Unauthorized Duplications (No AI FRAUD) Act epitomizes the problem. Under the guise of safeguarding “Americans’ individual right to their likeness and voice,” the bill would restrict a wide array of content, potentially including parody videos, comedic impressions, political cartoons, and more. Its sponsors, Reps. María Elvira Salazar (R-Fla.) and Madeleine Dean (D-Pa.), say they are concerned about “AI-generated fakes and forgeries” and intend to define each individual’s likeness and voice as that person’s intellectual property.

While the bill cites instances of AI being used to create deceptive advertisements and music, its reach extends much further, covering a broad range of “digital depictions” and “digital voice replicas.” Salazar and Dean say the bill balances people’s “right to control the use of their identifying characteristics” with “First Amendment protections to safeguard speech and innovation.” But despite that nod to free speech, the bill expands the categories of speech that can be legally restricted, inviting legal trouble for creators and platforms that exercise their First Amendment rights and threatening to chill certain forms of comedy, commentary, and artistic expression.

At its core, the No AI Fraud Act creates a right to sue people who use another person’s likeness or voice without consent. It declares that “every individual has a property right in their own likeness and voice” and requires written consent, negotiated with legal representation or governed by a collective bargaining agreement, before anyone may use someone’s “digital depiction or digital voice replica” in a manner affecting interstate or foreign commerce. Though nominally limited to interstate or foreign commerce, that provision effectively encompasses nearly anything involving the internet. The bill defines “digital depiction” as any replica, imitation, or likeness of an individual, living or dead, created or altered through digital technology. It likewise covers audio renderings created or altered with digital technology, in both audio and audio-visual content, whether the representation is actual or simulated.

The bill could also expose anyone involved in distributing or publishing such content to a lawsuit. The lawmakers did insert a section acknowledging First Amendment protections as a defense against alleged violations. That is not entirely reassuring, however, because the bill implies an expansion of the categories of speech left unprotected by the First Amendment: by assigning intellectual property rights to likenesses and voices, it opens the door to restrictions on depicting another person’s likeness or voice.

This bill could prompt a flood of takedown requests for content deemed potentially infringing, including parody, impressions, and other forms of artistic expression, and it could push platforms to ban more parody accounts in order to shield themselves and their creators from legal repercussions. The bill does say imitators are not liable if the harm they cause is insignificant, but harm includes emotional distress, a thoroughly subjective standard. It also classifies certain categories of content, including intimate images and sexually explicit material, as inherently harmful, leaving no room to dispute whether the depicted images actually caused harm. Proponents may argue that the bill targets deepfake pornography, but its broad language could also sweep in a wide range of content, from erotic art to political commentary that evokes intimacy between public figures.