April 20, 2024

Politicians around the world are blaming AI to deflect accusations

Artificial intelligence experts have long warned that AI-generated content could muddy the waters of perceived reality. Weeks into a crucial election year, confusion over AI is growing.

Politicians around the world have been deflecting potentially damning evidence — grainy videos of hotel trysts, voice recordings criticizing political opponents — by dismissing it as AI-generated deepfakes. At the same time, AI deepfakes are being used to spread misinformation.

On Monday, the New Hampshire Department of Justice said it was investigating robocalls that featured what appeared to be an AI-generated voice that sounded like President Biden telling voters to skip Tuesday’s primary — the first notable use of AI for voter suppression in this campaign cycle.

Last month, former President Donald Trump dismissed an ad on Fox News featuring video of his well-documented public gaffes — including his difficulty pronouncing the word “anonymous” in Montana and his visit to the California town of “Pleasure,” also known as Paradise, both in 2018 — claiming the footage was generated by AI.

“The perverts and losers of the failed and once dissolved Lincoln Project, and others, are using AI (Artificial Intelligence) in their fake television commercials to make me look as bad and pathetic as corrupt Joe Biden, not an easy thing to do,” Trump wrote on Truth Social. “FoxNews should not be running these ads.”

The Lincoln Project, a political action committee formed by moderate Republicans to oppose Trump, quickly denied the claim; the ad featured incidents during Trump’s presidency that were widely covered at the time and witnessed in real life by many independent observers.

Still, AI creates a “liar’s dividend,” said Hany Farid, a professor at the University of California, Berkeley who studies digital propaganda and disinformation. “When you actually catch a police officer or a politician saying something horrible, they have plausible deniability” in the age of AI.

AI “destabilizes the concept of truth itself,” added Libby Lange, an analyst at the disinformation tracking organization Graphika. “If everything could be fake, and if everyone claims that everything is fake or manipulated in some way, there really is no sense of ground truth. Politically motivated actors, especially, can adopt any interpretation they choose.”

Trump is not the only one exploiting this dynamic. Around the world, AI is becoming a common scapegoat for politicians trying to defend themselves against damaging accusations.

Late last year, a grainy video emerged of a Taiwanese ruling party politician entering a hotel with a woman, suggesting he was having an affair. Commentators and other politicians quickly came to his defense, saying the footage was generated by AI — though it remains unclear whether it actually was.

In April, a 26-second voice recording was leaked in which a politician from the southern Indian state of Tamil Nadu appeared to accuse his own party of illegally amassing $3.6 billion, according to a report by Rest of World. The politician denied the veracity of the recording, calling it “machine-generated”; experts have said they are not sure whether the audio is real or fake.

AI companies have generally said their tools should not be used in political campaigns, but enforcement has been spotty. On Friday, OpenAI banned a developer from using its tools after the developer built a bot that imitated Democratic presidential candidate Dean Phillips. Phillips’s campaign had supported the bot, but after The Washington Post reported on it, OpenAI ruled that it violated rules against using its technology for campaigns.

The confusion surrounding AI also extends beyond politics. Last week, social media users began circulating an audio clip they claimed was a Baltimore County, Maryland, school principal going on a racist tirade against Jewish and Black students. The union representing the principal has said the audio is AI-generated.

Several signs point to that conclusion, including the even cadence of the speech and hints of splicing, said Farid, who analyzed the audio. But without knowing where it came from or in what context it was recorded, he said, it’s impossible to say for sure.

On social media, commenters appear to overwhelmingly believe the audio is real, and the school district says it has launched an investigation. A request for comment sent to the principal through his union was not returned.

These claims carry weight because AI deepfakes are more common now and better replicate a person’s voice and appearance. Deepfakes regularly go viral on X, Facebook and other social platforms. Meanwhile, the tools and methods for identifying AI-created media are not keeping pace with the rapid advances in AI’s ability to generate such content.

Faked but realistic-looking images of Trump have gone viral several times. Earlier this month, actor Mark Ruffalo posted AI images of Trump with teenagers, claiming the images showed the former president on a private plane owned by convicted sex offender Jeffrey Epstein. Ruffalo later apologized.

Trump, who has spent weeks criticizing AI on Truth Social, posted about the incident saying, “This is AI and it is very dangerous for our country!”

Growing concern about the impact of AI on global politics and the economy was a major theme at the global leaders and CEOs conference in Davos, Switzerland, last week. In her opening remarks at the conference, Swiss President Viola Amherd called AI-generated propaganda and lies “a real threat” to global stability, “especially today, when the rapid development of artificial intelligence contributes to the growing credibility of this type of fake news.”

Technology and social media companies say they are exploring systems to automatically verify and moderate AI-generated content that purports to be real, but they have not built them yet. Meanwhile, only experts have the technology and expertise to analyze media and determine whether it is real or fake.

That leaves very few people capable of truth-squadding content that can now be generated with easy-to-use AI tools available to almost anyone.

“You don’t have to be a computer scientist. You don’t need to know how to code,” Farid said. “There are no barriers to entry anymore.”

Aviv Ovadya, an expert on AI’s impact on democracy and an affiliate of Harvard University’s Berkman Klein Center, said the general public is much more aware of AI deepfakes now than they were five years ago. As politicians see others evade criticism by claiming that the evidence published against them is AI, more people will make that claim.

“There is a contagion effect,” he said, pointing to a similar rise in politicians falsely calling an election rigged.

Ovadya said tech companies have the tools to rein in the problem: They could watermark audio to create a digital fingerprint, or join a coalition aimed at preventing the spread of misleading information online by developing technical standards that establish the origins of media content. Most importantly, he said, they could adjust their algorithms so they do not promote sensational but potentially false content.

So far, he said, technology companies have largely failed to take steps to safeguard the public’s perception of reality.

“As long as the incentives remain engagement-driven sensationalism, and really, conflict,” he said, “that’s the kind of content — whether deepfake or not — that will surface.”

Drew Harwell and Nitasha Tiku contributed to this report.
