Google Requires Political Parties to Disclose Use of AI in Election Ads
Google is shaking up its political advertising game. Starting in November, politicians will have to disclose whether they're using any "synthetic" or AI-generated images or videos in their ads on Google's platforms. The company announced the change in a recent blog post.
This move comes as AI tools like Google's Bard chatbot and OpenAI's DALL-E image generator have become so advanced they can create content almost indistinguishable from reality. While Google already bans "deepfakes" designed to deceive voters, the new policy takes it a step further. Politicians will need to slap a label on their ads warning viewers about the synthetic content.
This policy is a timely response to escalating public anxieties over deepfakes and their potential impact on democratic processes. Yet its efficacy is questionable. As a safeguard, it feels akin to a bouncer checking IDs at a club notorious for letting in underage patrons.
This change is driven by the "growing prevalence of tools that produce synthetic content," according to Google. The tech giant, along with Meta, has been under pressure to combat false claims on its platforms.
Fake images and audio are already making their way into global election campaigns. For instance, Ron DeSantis' presidential campaign circulated AI-generated images of Donald Trump embracing Anthony Fauci, and a Polish opposition party used an AI-cloned voice of the country's prime minister in an ad.
However, the new rules won't apply to regular videos on YouTube. So while Google is tightening the leash on political ads, user-generated content remains a Wild West. The question now is whether these changes will be enough to keep the integrity of political advertising intact.