What is Spamouflage? How a Chinese firm uses fake accounts to confuse US voters.
A network of fake online accounts, linked to a Chinese disinformation campaign, mimics Americans to spread false or inflammatory news about US politics.
Washington
When he first emerged on social media, the user known as Harlan claimed to be a New Yorker and an Army veteran who supported Donald Trump for president. Harlan said he was 29, and his profile picture showed a smiling, handsome young man.
A few months later, Harlan underwent a transformation. Now, he claimed to be 31 and from Florida.
New research into Chinese disinformation networks targeting American voters shows Harlan’s claims were as fictitious as his profile picture, which analysts think was created using artificial intelligence.
As voters prepare to cast their ballots this fall, China has been making its own plans, cultivating networks of fake social media users designed to mimic Americans. Whoever or wherever he really is, Harlan is a small part of a larger effort by U.S. adversaries to use social media to influence and upend America’s political debate.
The account was traced back to Spamouflage, a Chinese disinformation group, by analysts at Graphika, a New York-based firm that tracks online networks. Known to online researchers for several years, Spamouflage earned its moniker through its habit of spreading large amounts of seemingly unrelated content alongside disinformation.
“One of the world’s largest covert online influence operations – an operation run by Chinese state actors – has become more aggressive in its efforts to infiltrate and to sway U.S. political conversations ahead of the election,” Jack Stubbs, Graphika’s chief intelligence officer, told The Associated Press.
Intelligence and national security officials have said that Russia, China, and Iran have all mounted online influence operations targeting U.S. voters ahead of the November election. Russia remains the top threat, intelligence officials say, even as Iran has become more aggressive in recent months, covertly supporting U.S. protests against the war in Gaza, and attempting to hack into the email systems of the two presidential candidates.
China, however, has taken a more cautious, nuanced approach. Beijing sees little advantage in supporting one presidential candidate over the other, intelligence analysts say. Instead, China’s disinformation efforts focus on campaign issues particularly important to Beijing – such as American policy toward Taiwan – while seeking to undermine confidence in elections, voting, and the U.S. in general.
Officials have said it’s a longer-term effort that will continue well past Election Day as China and other authoritarian nations try to use the internet to erode support for democracy.
Chinese Embassy spokesperson Liu Pengyu rejected Graphika’s findings as full of “prejudice and malicious speculation” and said that “China has no intention and will not interfere” in the election.
X, the platform formerly known as Twitter, suspended several of the accounts linked to the Spamouflage network after questions were raised about their authenticity. The company did not respond to questions about the reasons for the suspensions, or whether they were connected to Graphika’s report.
TikTok also removed accounts linked to Spamouflage, including Harlan’s.
“We will continue to remove deceptive accounts and harmful misinformation as we protect the integrity of our platform during the US elections,” a TikTok spokesperson wrote in a statement emailed on Tuesday.
Compared with armed conflict or economic sanctions, online influence operations can be a low-cost, low-risk means of flexing geopolitical power. Given the increasing reliance on digital communications, the use of online disinformation and fake information networks is only likely to increase, said Max Lesser, senior analyst for emerging threats at the Foundation for Defense of Democracies, a national security think tank in Washington.
“We’re going to see a widening of the playing field when it comes to influence operations, where it’s not just Russia, China, and Iran but you also see smaller actors getting involved,” Mr. Lesser said.
That list could include not only nations but also criminal organizations, domestic extremist groups and terrorist organizations, Mr. Lesser said.
When analysts first noticed Spamouflage five years ago, the network tended to post generically pro-China, anti-American content. In recent years, the tone sharpened as Spamouflage expanded and began focusing on divisive political topics like gun control, crime, race relations, and support for Israel during its war in Gaza. The network also began creating large numbers of fake accounts designed to mimic American users.
Spamouflage accounts don’t post much original content, instead using platforms like X or TikTok to recycle and repost content from far-right and far-left users. Some of the accounts seemed designed to appeal to Republicans, while others cater to Democrats.
While Harlan’s accounts succeeded in getting traction – one video mocking President Joe Biden was seen 1.5 million times – many of the accounts created by the Spamouflage campaign did not. It’s a reminder that online influence operations are often a numbers game: the more accounts, the more content, the better the chance that one specific post goes viral.
Many of the accounts newly linked to Spamouflage took pains to pose as Americans, sometimes in obvious ways. “I am an American,” one of the accounts proclaimed. Some of the accounts gave themselves away by using stilted English or strange word choices. Some were clumsier than others: “Broken English, brilliant brain, I love Trump,” read the biographical section of one account.
Harlan’s profile picture, which Graphika researchers believe was created using AI, was identical to one used in an earlier account linked to Spamouflage. Messages sent to the person operating Harlan’s accounts were not returned.
This story was reported by The Associated Press.