Iran hacking Trump? AI deepfakes? Cyber side of 2024 election heats up.

Elizabeth Frantz/Reuters
Former President Donald Trump claimed that the Kamala Harris campaign used AI to fake her rally size. This Reuters photo shows Air Force Two as supporters of Vice President Harris rally at Detroit Metropolitan Wayne County Airport in Romulus, Michigan, Aug. 7, 2024.

Everybody knew artificial intelligence would play a role in this year’s election, but not quite this way.

On Sunday, Republican presidential nominee Donald Trump falsely claimed that Democratic opponent Kamala Harris had used AI tools to fabricate the size of crowds at her rallies. Media outlets, including the local Fox TV affiliate that live-streamed a large Detroit airport event, debunked the former president’s social media post.

Whether it is candidates accusing opponents of altering videos, even when such claims can be easily disproved, or surprise findings that AI-aided fake political news is having only mixed success, 2024 is not turning out the way cybersecurity specialists expected. AI influence campaigns were supposed to be smarter and more subtle than what has happened so far in elections stretching from Indonesia to the United States.

Why We Wrote This

Recent days have seen false allegations of AI meddling, actual AI meddling, and reports of old-style hacking, all involving the U.S. election campaign. Yet so far, this election’s cyber chaos may be doing less damage than experts feared.

Cyber meddlers are still making trouble. Yet they appear to be relying on traditional tactics more than on AI. In the latest example, Iranian hackers may have stolen information from the Trump campaign.

This week’s false claim about Harris rally attendance and its amplification on social media highlight what some cybersecurity experts have long said: Although the use of AI deepfakes is growing, the best way to combat malign cyber influence in elections is to clamp down on its distribution.

“The one thing I want to fix? It’s the problem of the last 20 years: social media,” says Hany Farid, a professor at the University of California, Berkeley, and a pioneer in digital forensics and image analysis. “If I could create deepfakes of Biden and Trump and all I could do was mail it to my five friends, that’s really different than if I can cover Twitter and YouTube and TikTok with it.”

Two weeks ago, for example, tech billionaire Elon Musk grabbed headlines after he shared a video on his social media platform that used an AI voice-cloning tool to mimic the voice of Vice President Harris – saying things she hasn’t really said. He later said he assumed readers knew it was a parody.

“It’s not the number of fakes [that matters]; it’s their impact,” says Oren Etzioni, founder of TrueMedia.org. “One fake can have a major impact if it’s propagated widely via social media and people believe in it.” His nonprofit is offering media outlets and others a tool to spot deepfakes quickly.

While using AI can make creating fictional material much easier and quicker, there’s no guarantee it will have its intended effect.

Petr David Josek/AP
Fireworks explode during the 2024 Summer Olympics closing ceremony at the Stade de France, Aug. 12, 2024, in Saint-Denis, France. Pro-Russian propagandists tried to denigrate the Games through stunts that included using AI to recreate the voice of actor Tom Cruise. The real Mr. Cruise rappelled into the ceremony to a cheering crowd.

For instance, a report in April by the Microsoft Threat Analysis Center (MTAC) found that a Russian influence operation called Storm-1679 repeatedly used generative AI to try to undermine the Paris Olympics, but it failed. In an update Friday, MTAC identified a Chinese group that incorporated the technology, “but with limited to no impact.”

OpenAI, the company behind the popular chatbot ChatGPT, reached a similar conclusion in a report in May. It found that although influencers linked to Russia, China, and Iran used its tools to generate articles in various languages, create names and bios for social media accounts, and debug computer code, among other activities, they had not “meaningfully increased audience engagement or reach.”

These failures may explain why foreign influencers have returned to more tried-and-true techniques. “We’ve seen nearly all actors seek to incorporate AI into their content in their operations, but more recently, many actors have pivoted back to techniques that have proven effective in the past,” according to the MTAC report released Friday.

Consider Mr. Trump’s allegation this past weekend that Iranian hackers, who may have been conducting a traditional cyberattack known as spear-phishing, had stolen internal documents from his campaign. He was apparently referring to Friday’s MTAC report, which singled out an Islamic Revolutionary Guard Corps unit that recently used a compromised account of a former political adviser to email “a high-ranking official of a presidential campaign.”

The email included a fake forwarding address with a link to a site controlled by the unit, according to the report. In July, the political news website Politico began receiving internal Trump campaign documents from an anonymous source, including a 271-page dossier of publicly available information on Ohio Sen. JD Vance, the GOP vice-presidential nominee.

Now the FBI is investigating alleged Iranian attacks on both the Democratic and the Republican presidential campaigns.

Jim Urquhart/Reuters
Republican presidential nominee Donald Trump attends a campaign rally in Bozeman, Montana, Aug. 9, 2024. This week’s false claim by the former U.S. president spread quickly, highlighting experts’ argument that the best way to combat malign cyber influence is to clamp down on its distribution.

“Over the past several months, we have seen the emergence of significant influence activity by Iranian actors,” the MTAC report says.

While distribution via social media poses the bigger problem, the number of AI political deepfakes continues to increase around the world, especially during campaign seasons.

“India, Pakistan, Taiwan, Indonesia, Mexico … in each of these elections, every single one, we’ve seen deepfakes, and they’ve become increasingly persuasive,” says Mr. Etzioni of TrueMedia.org. 

In India this spring, police arrested people from two opposition parties after a video falsely showed the home minister saying that the government would end an affirmative-action jobs program for disadvantaged castes.

Deepfakes aren’t always malign. In Pakistan, supporters of an opposition party used deepfakes of their jailed leader, with his permission, to appeal to voters, who gave the party a plurality. It was a historic outcome, even though the party backed by the military eventually formed a coalition government with another opposition party.

Here in the United States, the motives have also been mixed. In January, before the New Hampshire primary, a political consultant paid for a deepfake of President Joe Biden discouraging voters from going to the polls. The consultant, a Democrat, claimed he did it to alert his party to the dangers of AI. Nevertheless, the Federal Communications Commission has since proposed fining him $6 million, and New Hampshire has indicted him on 32 counts related to election interference.

Elizabeth Frantz/Reuters
A deepfake video of U.S. President Joe Biden, posted on X, appeared to show him cursing his critics after he announced he would not seek reelection from the Oval Office of the White House in Washington, July 24, 2024.

Last month, a deepfake appeared to show Mr. Biden swearing at viewers during his televised speech ending his reelection campaign. The video went viral on the social media platform X (formerly Twitter). 

One reason for spreading such content is to damage a candidate’s reputation. Another, more subtle aim is “polarizing the already divided segments of the society,” says Siwei Lyu, co-director of the Center for Information Integrity at the University at Buffalo.

Increasing this divisiveness often appears to be the aim of influence campaigns by foreign nations. In this election cycle, “we expect Iranian actors will employ cyberattacks against institutions and candidates while simultaneously intensifying their efforts to amplify existing divisive issues within the U.S., like racial tensions, economic disparities, and gender-related issues,” MTAC said in its report.

Companies and governments are beginning to respond to such threats.

At a Munich conference in February, Google, Meta, OpenAI, X, and 16 other large tech companies committed to the Tech Accord to Combat Deceptive Use of AI in 2024 Elections. Among other points, the accord commits signatories to detecting and addressing such content on their platforms. In May, OpenAI announced it had closed the accounts of the Russian, Chinese, and Iranian influencers it had detected.

While democracy advocates applauded the move, many say industry self-regulation isn’t enough. The European Commission, for its part, is actively investigating potential failures by platforms such as Facebook and Instagram.

Mr. Farid at Berkeley sees a big change from 20 years ago.

“Now there is an awareness that ... the government does have to step in,” he says.  

In the U.S., the magnitude of the shift is an open question. In June, the U.S. Supreme Court declined to rule on whether the White House and federal agencies can push social media companies to remove content the federal government deems misinformation. The justices said the plaintiffs lacked standing to bring the case.

The result: Although some federal oversight of misinformation may continue for now, the debate over whether such oversight violates constitutional free-speech protections remains unsettled.
