AI poses political peril for 2024 with threat to mislead voters

Computer engineers and tech-savvy political scientists have warned for years that cheap and powerful artificial intelligence tools would soon allow anyone to create fake images, video, and audio that are realistic enough to mislead voters and perhaps influence an election.

The synthetic images that emerged were often crude, unconvincing, and expensive to produce, especially when other kinds of misinformation were so cheap and easy to spread on social media. The threat posed by AI and so-called deepfakes always seemed to be a year or two away.

No more.

Sophisticated generative AI tools can now create cloned human voices and hyper-realistic images, video and audio in seconds, at minimal cost. When connected to powerful social media algorithms, this fake and digitally created content can spread far and fast and target very specific audiences, potentially taking campaign dirty tricks to a new level.

The implications for the 2024 campaigns and elections are as big as they are concerning: Not only can generative AI quickly produce targeted campaign emails, texts or videos, but it could also be used to mislead voters, impersonate candidates and undermine elections on a large scale and at a speed not yet seen.

“We are not prepared for this,” warned AJ Nash, vice president of intelligence at the cybersecurity firm ZeroFox. “For me, the big leap forward is the audio and video capabilities that have come out. When you can do that on a large scale and distribute it on social platforms, well, it’s going to have a big impact.”

AI experts can quickly list a number of alarming scenarios in which generative AI is used to create synthetic media in order to confuse voters, smear a candidate, or even incite violence.

Here are a few: automated robocall messages, in a candidate’s voice, instructing voters to cast their ballots on the wrong date; audio recordings of a candidate allegedly confessing to a crime or expressing racist views; video footage showing someone giving a speech or interview they never gave; fake images designed to look like local news reports, falsely claiming that a candidate had dropped out of the race.

“What if Elon Musk calls you personally and tells you to vote for a certain candidate?” said Oren Etzioni, the founding CEO of the Allen Institute for AI, who stepped down last year to start the nonprofit AI2. “A lot of people would listen. But it’s not him.”

Former President Donald Trump, who is running again in 2024, has shared AI-generated content with his followers on social media. A doctored video of CNN host Anderson Cooper that Trump shared on his Truth Social platform on Friday, which distorted Cooper’s reaction to CNN’s town hall with Trump last week, was created using an AI voice-cloning tool.

A dystopian campaign ad released last month by the Republican National Committee offers another glimpse into this digitally manipulated future. The online ad, which appeared after President Joe Biden announced his re-election campaign, begins with a bizarre and slightly distorted image of Biden and the text “What if the weakest president we’ve ever had was re-elected?”

What follows is a series of AI-generated images: Taiwan under attack; boarded-up storefronts in the United States as the economy crumbles; soldiers and armored military vehicles patrolling local streets as tattooed criminals and waves of immigrants create panic.

“An AI-generated look at the possible future of the country if Joe Biden is re-elected in 2024,” the RNC ad description reads.

The RNC acknowledged its use of AI, but others, including nefarious political campaigns and foreign adversaries, will not, said Petko Stoyanov, global chief technology officer at Forcepoint, an Austin, Texas-based cybersecurity firm. Stoyanov predicted that groups seeking to meddle in American democracy will use artificial intelligence and synthetic media as a way to erode trust.

“What happens if an international entity, a cybercriminal or a nation state, impersonates someone? What is the impact? Do we have any recourse?” Stoyanov said. “We are going to see a lot more misinformation from international sources.”

AI-generated political misinformation has already gone viral online ahead of the 2024 election, from a doctored video of Biden appearing to give a speech attacking transgender people to AI-generated images of children allegedly learning Satanism in libraries.

Artificial intelligence images appearing to show Trump’s mugshot have also misled some social media users, even though the former president did not have one taken when he was booked and arraigned in Manhattan criminal court for falsifying business records. Other AI-generated images showed Trump resisting arrest, though their creator quickly acknowledged their origin.

Legislation that would require candidates to label AI-generated campaign ads has been introduced in the House by Rep. Yvette Clarke, D-N.Y., who has also sponsored legislation that would require anyone creating synthetic images to add a watermark indicating that fact.

Some states have offered their own proposals to address concerns about deepfakes.

Clarke said her biggest fear is that generative AI could be used before the 2024 election to create video or audio that incites violence and turns Americans against each other.

“It’s important that we keep up with the technology,” Clarke told The Associated Press. “We have to set up some guardrails. People can be fooled, and it only takes a split second. People are busy with their lives and don’t have time to verify every piece of information. AI, weaponized in a political season, could be extremely disruptive.”

Earlier this month, a trade association for political consultants in Washington condemned the use of deepfakes in political advertising, calling them a “deception” that “has no place in legitimate and ethical campaigns.”

Other forms of artificial intelligence have been a feature of political campaigns for years, using data and algorithms to automate tasks like targeting voters on social media or tracking donors. Campaign strategists and tech entrepreneurs hope that the latest innovations will also offer some silver linings in 2024.

Mike Nellis, CEO of the progressive digital agency Authentic, said he uses ChatGPT “every day” and encourages his staff to use it as well, as long as any content drafted with the tool is reviewed by human eyes afterward.

Nellis’ most recent project, in partnership with Higher Ground Labs, is an artificial intelligence tool called Quiller. It will write, send, and evaluate the effectiveness of fundraising emails, all typically tedious tasks on campaigns.

“The idea is that every Democratic strategist, every Democratic candidate will have a co-pilot in their pocket,” he said.

___

Swenson reported from New York.

___

The Associated Press receives support from several private foundations to improve its explanatory coverage of elections and democracy. See more about AP’s democracy initiative here. The AP is solely responsible for all content.

___

Follow AP’s coverage of misinformation at https://apnews.com/hub/misinformation and coverage of artificial intelligence at https://apnews.com/hub/artificial-intelligence