Synthetic faces and voices are being used to shape elections, sell products, and manipulate public trust, leaving regulators scrambling to catch up.
Artificial intelligence has transformed marketing and political campaigning alike. Major corporations now use AI-generated actors, voiceovers, and deepfake-style media to personalize brand experiences, while political campaigns deploy the same tools to shape public perception. AI offers real gains in efficiency, creativity, and engagement, but it also raises pressing ethical and legal concerns.
In commercial advertising, AI-generated actors have revived deceased celebrities, localized campaigns into multiple languages, and powered hyper-realistic virtual influencers. In political campaigns, by contrast, AI-generated content has been deployed to manipulate voter perceptions, misrepresent candidates, and spread misinformation at unprecedented scale. The absence of clear regulatory frameworks governing AI-generated content in political communication has fueled growing concerns over electoral integrity.
One of the most striking commercial examples came in December 2024, when Still G.I.N. released an advertisement featuring AI-generated versions of Frank Sinatra and Sammy Davis Jr. interacting with Dr. Dre and Snoop Dogg. The campaign blended nostalgia with innovation and set off debates over posthumous endorsements and the ethical boundaries of AI-driven marketing.
Another controversy arose when pop icon Rihanna discovered that an AI-generated mimicry of her voice was being used in a viral Instagram video titled "Rihanna's Most Expensive Purchases." She initially mistook the cloned voice for her own and, once she learned the truth, condemned the misuse of AI for unauthorized celebrity endorsements. Such cases highlight AI's potential to distort reality and erode trust in digital media.
AI has also been exploited to deceive consumers outright, as in a scam that used Whoopi Goldberg's AI-generated likeness to promote fraudulent weight-loss products. Goldberg publicly denounced the scam on The View, warning audiences not to trust AI-generated endorsements unless the person depicted has explicitly confirmed them. The case underscores the need for stronger regulatory oversight of AI-driven consumer marketing.
AI-generated manipulation has also raised concerns about election integrity. A controversial ad in the Wisconsin Supreme Court race used AI to alter an image of liberal candidate Susan Crawford, changing her facial expression to make her appear more stern. The ad, funded by opponent Brad Schimel's campaign, was accused of violating state disclosure laws on AI-generated political content. Schimel's team admitted to modifying the image but denied using AI, illustrating how blurry the ethical lines in AI-driven political advertising have become.
The Wisconsin ad is far from an isolated case.
While AI-driven messaging offers efficiency and personalization, its potential for manipulation threatens the integrity of democratic elections. Political actors can now use AI to fabricate images, alter videos, and craft misleading narratives that distort reality, making it increasingly difficult for voters to distinguish fact from fiction. The rise of AI-generated attack ads, deepfake endorsements, and automated propaganda raises urgent concerns about misinformation, voter deception, and electoral manipulation.
As AI-generated content becomes more prevalent in political campaigns, lawmakers at both the state and federal levels are working to regulate its use to prevent voter manipulation and misinformation. Currently, 16 states have enacted laws governing AI in political ads, while another 16 are considering similar legislation. These regulations focus primarily on mandatory disclosure: labeling AI-generated political content so voters know when they are seeing synthetic media.
At the federal level, legislators have introduced bills such as the Protect Elections from Deceptive AI Act and the AI Transparency in Elections Act of 2024. The former aims to prohibit the distribution of AI-generated content that materially misleads voters, while the latter would require disclaimers on AI-generated political advertisements to ensure transparency. The Federal Communications Commission (FCC) has also proposed rules requiring broadcast stations to disclose AI-generated content in political ads, reinforcing efforts to safeguard election integrity.
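Regulation aside, the disclosure requirement itself is straightforward to picture in software terms. Below is a minimal sketch of what a machine-readable disclosure record and a pre-publication check might look like. The schema and field names are entirely hypothetical, invented here for illustration; they are not drawn from any statute, the FCC proposal, or an industry provenance effort such as the C2PA content-credentials standard, each of which defines its own format.

```python
import json
from dataclasses import dataclass, asdict, field

# Hypothetical disclosure record. Every field name here is illustrative,
# not taken from any real law or standard.
@dataclass
class AIDisclosure:
    ad_id: str                  # campaign's internal identifier for the ad
    ai_generated: bool          # does the ad contain synthetic imagery, audio, or text?
    disclaimer: str             # plain-language notice shown to viewers
    generation_tools: list[str] = field(default_factory=list)  # tools or models used

def validate_disclosure(record: dict) -> list[str]:
    """Return a list of problems; an empty list means the record passes."""
    problems = []
    if record.get("ai_generated"):
        if not record.get("disclaimer"):
            problems.append("AI-generated ad lacks a viewer-facing disclaimer")
        if not record.get("generation_tools"):
            problems.append("AI-generated ad does not name the tools used")
    return problems

if __name__ == "__main__":
    disclosure = AIDisclosure(
        ad_id="2024-wi-0042",  # made-up identifier
        ai_generated=True,
        disclaimer="Portions of this ad were created with artificial intelligence.",
        generation_tools=["image-editing model (unspecified)"],
    )
    record = asdict(disclosure)
    print(json.dumps(record, indent=2))
    issues = validate_disclosure(record)
    print("PASSES" if not issues else "; ".join(issues))
```

In practice, the hard problems are not the label format but enforcement: detecting undisclosed synthetic media and agreeing on what counts as materially deceptive.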
Despite these efforts, legal challenges persist, particularly regarding the First Amendment. Free speech advocates warn that overly restrictive regulations could violate constitutional protections, leading to lawsuits and ongoing debates about how best to balance misinformation prevention with free expression. As AI technology continues to evolve, lawmakers face the challenge of developing regulatory frameworks that both protect voters from deception and uphold democratic freedoms.
In the 2024 U.S. elections, AI tools tailored political messages to individual voters, while startups like BattlegroundAI produced hundreds of campaign ads in minutes, raising concerns about misinformation and manipulation. The Wisconsin attack ad on Susan Crawford, described above, sparked debate over AI disclosure laws, and California's deepfake election law has already faced legal challenges from critics who argue it infringes on free speech rights.
As AI-generated media continues to shape public opinion, the lack of robust regulation threatens democratic integrity. Without mandatory AI disclosure labels, stricter legal oversight, and stronger fact-checking mechanisms, voters risk being misled by AI-powered propaganda, further eroding trust in elections and democratic institutions.