Researchers are concerned that advances in artificial intelligence (AI) applications could be used to mislead the voting public and potentially sway the outcomes of elections.
Recent advancements have allowed AI to respond to writing prompts in real time, effectively mimicking human conversation. AI has also enabled people to create lifelike images, videos, and voiceovers, commonly referred to as deepfakes. These tools have only grown more convincing over time, and some researchers now believe they could be used at scale in the upcoming 2024 U.S. election cycle.
While generative AI may help campaigns rapidly produce targeted campaign emails, texts, and videos, the same tools could also be used for more deceptive purposes, such as impersonating specific candidates.
Researchers have envisioned several potentially problematic scenarios: automated robocalls in a specific candidate's voice telling supporters to get out and vote on the wrong date, a faked newspaper headline informing readers that a particular candidate had dropped out of the race, or faked audio impersonating an influential public figure.
“What if Elon Musk personally calls you and tells you to vote for a certain candidate?” said Oren Etzioni, the founding CEO of the Allen Institute for AI. “A lot of people would listen. But it’s not him.”
Deceptive actors could also produce relatively convincing faked audio of a candidate confessing to a crime or expressing a view that might alienate voters. Conversely, a candidate might plausibly deny an incriminating authentic recording by claiming it was faked. In either case, the public could struggle to discern whether they're getting the truth or a carefully constructed lie.
“To me, the big leap forward is the audio and video capabilities that have emerged,” said A.J. Nash, the vice president of intelligence at cybersecurity firm ZeroFox. “When you can do that on a large scale, and distribute it on social platforms, well, it’s going to have a major impact.”
AI tools have already been used to impersonate celebrities and political candidates.
Over the past year, a trend of using AI to impersonate the voices of U.S. presidents has grown into a running meme, with Presidents Bill Clinton, George W. Bush, Barack Obama, Donald Trump, and Joe Biden being depicted trash-talking with each other over intense online videogaming matches.
In March, after Trump announced he anticipated being criminally charged by Manhattan District Attorney Alvin Bragg, fake images began to circulate online showing Trump being handcuffed, chased, and tackled by police officers. At the time, The Atlantic’s Charlie Warzel remarked, “This *fake* trump arrest ai generated photo still looks over-stylized and fake and the hands are weird as per usual…but hard not to see how much better this is getting in a short amount of time.”
In April, after Democratic President Joe Biden announced he was running for reelection, the Republican National Committee (RNC) released an attack ad that used AI-generated images to depict a hypothetical future in which Biden’s reelection victory leads to a Chinese invasion of Taiwan, U.S. borders being overrun by illegal immigrants, and a crime wave in San Francisco. A description for the RNC ad reads, “An AI-generated look into the country’s possible future if Joe Biden is re-elected in 2024.”
While the RNC acknowledged its use of AI-generated images for the ad, cybersecurity analyst Petko Stoyanov warned that nefarious actors and hostile nation-states won’t be so forthcoming in the future.
“What happens if an international entity—a cybercriminal or a nation state—impersonates someone. What is the impact? Do we have any recourse?” Stoyanov said. “We’re going to see a lot more misinformation from international sources.”
Though many researchers see the potential for nefarious misuse of AI, campaigns are leaning into the tools as a way to communicate more effectively with voters.
Mike Nellis, CEO of progressive digital agency Authentic, said he encourages his team to use AI text generator ChatGPT to help write campaign ads. Nellis said that as long as his staff reviews the content ChatGPT produces before they disseminate it, he doesn’t see a problem.
Nellis’s team is currently working with Higher Ground Labs to develop an AI tool called Quiller that can generate, distribute, and evaluate the effectiveness of new campaign fundraising emails.
“The idea is every Democratic strategist, every Democratic candidate will have a copilot in their pocket,” Nellis said.
Lawmakers and regulatory agencies are already discussing ways to constrain AI to prevent its misuse.
Last month the Civil Rights Division of the U.S. Department of Justice, the Consumer Financial Protection Bureau, the Federal Trade Commission (FTC), and the U.S. Equal Employment Opportunity Commission issued a joint statement warning that AI tools could be used to “perpetuate unlawful bias, automate unlawful discrimination, and produce other harmful outcomes” in the workforce. FTC Chair Lina M. Khan said AI tools could “turbocharge fraud and automate discrimination.”
While policymakers are looking to constrain AI, some researchers have warned against overregulating the technology.
Jake Morabito, director of the Communications and Technology Task Force at the American Legislative Exchange Council, has warned that overregulation could stifle innovative AI technologies in their infancy.
“Innovators should have the legroom to experiment with these new technologies and find new applications,” Morabito told NTD News in a March interview. “One of the negative side effects of regulating too early is that it shuts down a lot of these avenues, whereas enterprises should really explore these avenues and help customers.”
The Associated Press contributed to this article.
From NTD News