Who should judge what's true? Tackling social media's global impact.
Hong Kong and Berlin
“Truth” Take 1: In 2019, a peaceful pro-democracy movement in Hong Kong stunned the world; at its peak nearly 2 million Hong Kongers gathered in the streets to oppose a move they felt would erode their beloved city’s autonomy.
“Truth” Take 2: In 2019, violent pro-democracy protesters smashed windows, stormed Hong Kong’s legislative chamber, and stockpiled petrol bombs. Aided by anti-China foreign forces, the rioters were a radical fringe element who drew fewer than 350,000 at the movement’s height.
The continuing struggle for Hong Kong’s future is being fought not only between police and demonstrators for control of the streets, but also online in the digital sphere, as Beijing and protesters vie for control of the political narrative. And, in the digital world, there is always more than one version of the “truth.”
Why We Wrote This
Social media can spread positive vibes or dangerous lies. But whom do we trust to decide which is which? Governments? Mark Zuckerberg? And how much responsibility lies with individual news consumers? Part 10 in our global series “Navigating Uncertainty.”
“Anyone can publish anything,” says Johannes Hillje, a Berlin-based expert on social media and author of “Propaganda 4.0,” a book on German populism. “We still have no quality checks on this phenomenon.”
The result? Widespread confusion, just as social media networks command ever-greater shares of the global attention span. Last year, more than half of internet users surveyed worldwide reported reading something online that they believed to be true, before realizing it was false, according to Statista, an online business data portal.
This matters – because informed citizens are essential to the fight against ills such as climate change and poverty, and ultimately, to flourishing democracies. There are signs, say experts, that we are growing more discerning and warier of misinformation as we go online. A growing number of organizations are promoting media literacy and the importance of nonpartisan information sources, while social media platforms are under increasing public pressure to scrub misinformation.
Social media companies such as Facebook and YouTube have created and connected communities on a scale barely imagined a decade ago. Yet they have also propagated echo chambers of falsehoods that spread around the globe at the tap of a finger; the COVID-19 pandemic, the Hong Kong pro-democracy protests, and the rise of the far right in Germany offer vivid illustrations.
How do we collectively tackle this problem? Who should judge what is true, and whether those who lie have the same right as anyone else to express themselves online? Is the idea that governments should police online content any more acceptable than the idea that for-profit companies such as Facebook or Twitter should fill that role?
“Now gatekeeping has to be done by every individual,” Mr. Hillje says. “We haven’t learned this as a society. That takes much more time than it does to introduce new technologies.”
The COVID-19 dimension
Digital citizens have been given a crash course by COVID-19, as misinformation about the coronavirus has spread widely.
Every aspect of the pandemic has been subject to online falsehoods, from where it originated to how infectious it is (deliberately released by a Chinese scientist? Secretly brought to China by a U.S. soldier?).
“Pandemics are fertile ground for people being vulnerable to conspiracy theories, because there’s a lot of uncertainty and fear,” says John Cook, a researcher at George Mason University in Virginia who studies media narratives around climate change. “But misinformation can kill.”
President Donald Trump himself has been a source of misinformation, tweeting approval of unproven methods of combating COVID-19. But he was angered when Twitter took the unprecedented step of cautioning users against the president’s “unsubstantiated” claims about the reliability of mail-in votes, and flagged another of his tweets as “glorifying violence.”
Building a party on misinformation
In Germany, it is the far-right Alternative für Deutschland (AfD) political party that has mounted the most successful social media strategy – and political misinformation campaign – in recent history. Standing on an anti-Muslim, anti-immigrant platform, the AfD has doubled its membership over the past seven years, largely on the back of online campaigns that spread inaccuracies, play on voters’ emotions, and highlight scandals that polarize and provoke.
That’s exactly the type of content that platforms’ algorithms tend to promote. Research has shown that tweaks to Facebook and YouTube algorithms over the years – designed to increase “engagement” – have also boosted divisive posts that provoke outrage. In 2019, the most-shared posts on Facebook had to do with child trafficking and abortion, according to the social media tracking company NewsWhip.
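To see why outrage travels so well, consider a minimal sketch of engagement-weighted ranking. The weights and posts below are invented for illustration; the platforms’ actual formulas are proprietary and far more complex. The pattern, though, is the one researchers describe: when shares and comments count for more than quiet approval, provocative content rises.

```python
# Hypothetical engagement-weighted feed ranking, for illustration only.
# Weights and post data are invented; real platform formulas are proprietary.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    likes: int
    shares: int
    comments: int

def engagement_score(post: Post) -> float:
    # Shares and comments are weighted above likes, so posts that provoke
    # strong reactions (often divisive ones) climb the feed.
    return 1.0 * post.likes + 5.0 * post.shares + 3.0 * post.comments

feed = [
    Post("Local charity drive", likes=900, shares=20, comments=40),
    Post("Outrage-bait rumor", likes=300, shares=400, comments=600),
]
for post in sorted(feed, key=engagement_score, reverse=True):
    print(f"{engagement_score(post):>6.0f}  {post.title}")
# The rumor (score 4100) outranks the charity drive (score 1120),
# even though far more people "liked" the latter.
```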
Few issues in Germany have been as politically divisive as migration, which the AfD has used to its advantage. The party’s social media posts have overstated the number of migrants seeking asylum in Germany by up to a million. Its leaders have falsely claimed that foreigners commit more crimes than Germans, and cautioned that Europe was becoming “Eurabia” with an advertisement depicting a white woman surrounded by Muslim men.
The party has created an “alternative media universe,” says Mr. Hillje. “They delegitimize mainstream media, and try to create a collective identity among their followers and their audience.” AfD Facebook posts are shared five times more often than posts by traditional parties, and the majority of political retweets relate to the AfD.
Social media, however, allows the AfD to bypass mainstream media that “are heavily against and unfair to the AfD,” complains Ronald Glaser, the party’s lead Berlin spokesman. “For our competitors, it’s not so important,” he says. “If they want to get the message out they call television.”
The AfD is now the third-largest party in the German parliament, with the power to shape the mainstream political narrative.
Lies and videotape
Nearly a year after pro-democracy protests began in Hong Kong, the Chinese government moved last month to impose a new national security law that would further erode the former British colony’s political and cultural autonomy. That has given the protests new urgency.
Since the demonstrations began, pro-Beijing forces have mounted a potent misinformation campaign that sought to paint protesters as violent rioters aided by foreign agents. The goal was to turn mainland Chinese citizens and world public opinion against the movement.
Spreading “lies” was one part of a “three-pronged approach of censorship, surveillance, and misinformation,” says Lokman Tsui, a tech analyst and Google’s former head of free expression in Asia. The strategy is “sophisticated and weaponized to be on the offensive,” he says.
Chinese state media websites have misrepresented events, for example, by reporting a demonstrator’s police-inflicted eye injury as the work of a fellow protester, or falsely tweeting that 2 million Hong Kongers had signed a petition calling for Beijing to be granted more power.
Pro-Beijing forces have also used fake social media accounts to post content, purchased ads on social media platforms, and encouraged Chinese officials and supporters abroad to tweet.
They are also seeking to sow division within private pro-democracy messaging groups.
“It’s really hard for people to realize what the Communist Party is doing,” says Isaac Cheng of the pro-democracy political group Demosisto. “Their aim is to demoralize the entire movement.”
On the flip side, protesters have posted thousands of videos and images to bring their story directly to the public. Their narrative has sometimes penetrated: U.S. House Speaker Nancy Pelosi referred to “2 million protesters” as she urged Congress to support the “impressive … young people speaking out for democratic freedoms in Hong Kong.”
The true size of the protest marches last year? Around 800,000 at their peak, according to independent analysts who relied on artificial intelligence and crowd-density measurements.
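Such estimates rest on simple arithmetic: multiply the area a march occupies by an assumed crowd density, with AI-based counting refining the density figure from aerial footage. The numbers in this sketch are invented to show the method, not the analysts’ actual measurements.

```python
# Illustrative crowd-size estimate from area x density.
# All figures are hypothetical, not the analysts' actual inputs.
route_length_m = 4_000        # length of the march route, in meters (assumed)
avg_width_m = 25              # average usable street width, in meters (assumed)
density_ppl_per_m2 = 2.0      # moderate packing; dense crowds exceed 4 (assumed)

area_m2 = route_length_m * avg_width_m
estimate = area_m2 * density_ppl_per_m2
print(f"Estimated crowd size: {estimate:,.0f} people")  # -> 200,000
```

Because marchers flow past fixed points over hours, analysts typically also account for turnover; a static snapshot therefore understates a march’s cumulative total.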
As a new round of protests gears up, the battle for global public opinion continues online, as fiercely as it does on the streets.
Who decides?
Who should be responsible for monitoring the content that fills our screens every day? “Who gets to decide what is an information operation or what is impulsive manipulation?” asks Evelyn Douek, an attorney and media rights researcher at Harvard University.
In the Hong Kong case, Twitter announced last August that it would no longer allow “state-controlled news media entities” to purchase advertising. Facebook and Google have also taken down some posts, fake accounts, and links, but offered little transparency about what they were doing, says Ms. Douek.
This patchwork approach illustrates a real challenge. Currently, policing content requires “private, for-profit companies to decide people’s speech rights all around the world,” Ms. Douek says. “This is untenable and unacceptable.”
Facebook founder Mark Zuckerberg does not seem keen on taking that responsibility. “I don’t think that Facebook or internet platforms in general should be arbiters of truth,” he told CNBC in an interview last month.
Facebook and YouTube have hired thousands of independent monitors to view content that’s flagged for review, and to remove inappropriate posts. But Mr. Zuckerberg acknowledged in his CNBC interview that the fact-checkers’ job is only to “catch the worst of the worst stuff.” He is now facing an open revolt among employees who want Facebook to do more.
On the other hand, inviting governments to get too involved in regulating speech amounts to “walking on ice,” says Stephan Mundges, a digital communications researcher at Dortmund Technical University. “If we don’t have robust democratic institutions, any way of trying to regulate disinformation means that freedom of speech is in danger,” he warns.
Take the example of South Africa, where the government has made it a crime to post COVID-19 misinformation, and also to criticize the government, says Mr. Cook, the climate communications researcher. “It’s a very slippery slope.”
How the Germans do it … and the Americans
Trying to keep its footing on that slope is Germany. In the widest-ranging action by a Western democracy to date, the government two years ago compelled media platforms to identify and remove any content defined by the law as “illegal.” There are 21 categories, and platforms can be fined up to 5 million euros for failing to remove such content within 24 hours of its being flagged.
Human Rights Watch has declared the law vague and overly broad, and policymakers are currently considering changes, including allowing users to appeal decisions. The law’s effectiveness is unclear: six months after it was passed, only a small percentage of reported content had been removed. Facebook had taken down only 20% of the 1,700 posts reported to it, and Twitter had removed just 10% of items reported as illegal by users.
The United States, as the world’s largest economy and headquarters for the globe’s most popular social media companies, is arguably the most important place in which to get the balance between digital freedom and responsibility right.
Being home base “gives regulators a certain degree of power over platforms that other jurisdictions don’t have,” says Ms. Douek. Yet the Communications Decency Act, which President Trump has sought to amend by executive order, essentially shields companies from legal liability for the content posted on their platforms. To date, any action must be largely prompted from within the companies themselves.
Michael Quinn contends that society should be thinking about ethics and responsibility even farther upstream – when systems are being designed. Dean of Science and Engineering at Seattle University, and author of the popular text “Ethics for the Information Age,” Dr. Quinn has made a mission of educating computer science students and tech executives alike.
A significant challenge around artificial intelligence, he says, is the bias inherent in the data used to train these systems. “If the data is being collected from the racist past, you could be building a future in which automated systems are making racist decisions,” explains Dr. Quinn.
He points to a real-life example: a hospital monitoring system designed to recommend which patients should receive early interventions. Engineers found the AI system was recommending treatment more often for white patients than for black patients, because historical data showed whites receiving more treatment – and that was just because they had more money.
“There need to be some guardrails,” Dr. Quinn warns.
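A minimal sketch of the dynamic Dr. Quinn describes, using synthetic data and an off-the-shelf classifier. Everything here – the groups, the spending proxy, the model – is invented for illustration; it is not the hospital’s actual system. The point is that a model trained on a proxy like past spending, rather than medical need, faithfully reproduces the historical skew.

```python
# Minimal illustration of historical bias leaking into a trained model.
# Synthetic data, invented for illustration; not any hospital's real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 5_000
group = rng.integers(0, 2, n)              # two patient groups
severity = rng.normal(5, 2, n)             # true medical need: identical for both
spending = severity + 3 * group + rng.normal(0, 1, n)  # group 1 spends more

# Historical label: patients were treated when past *spending* was high,
# so the label encodes wealth, not need.
treated = (spending > 7).astype(int)

# A model fit to spending learns the skew and repeats it.
model = LogisticRegression().fit(spending.reshape(-1, 1), treated)
preds = model.predict(spending.reshape(-1, 1))
for g in (0, 1):
    print(f"group {g}: recommended for treatment {preds[group == g].mean():.0%}")
# Despite identical severity distributions, group 1 is recommended far more
# often - the biased past baked into the training data.
```

One guardrail is simply auditing for exactly this kind of disparity before deployment – and training on measures of medical need rather than proxies like spending.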
The answer – think harder
Even if online gatekeeping institutions strike an acceptable balance, individual netizens will still tend to seek out information that supports their opinions. “Confirmation bias existed before social media,” says Mr. Hillje, the Berlin policy analyst. “This is a human phenomenon, not a digital phenomenon.”
John Gable, a former Silicon Valley insider who is now CEO of AllSides.com (which has a partnership with CSMonitor.com), aims to expose people to information from all political perspectives. “So people are empowered to decide for themselves,” Mr. Gable says.
AllSides.com places left-, center-, and right-leaning news streams side by side, aiming to burst media “filter bubbles,” those echo chambers of information that build around single sources. Bursting these bubbles may be easier than it seems: research has found that 9 out of 10 students showed improved understanding after just one conversation with others across a spectrum of viewpoints.
Ultimately, the solution must include stronger critical thinking by individuals, argues Mr. Cook, the climate change communications researcher. People must constantly ask themselves, “What kind of filters do I use? What stereotypes do I have when I consume information? Where did this information come from?”
There are signs that people going online are doing this mental work. A Pew Research Center survey in 2018 showed 78% of Americans prefer news from sources without a partisan slant, up 14 points from five years ago. Last year, a Pew survey also found more Americans were concerned about made-up news than about climate change, racism, illegal immigration, and terrorism.
“The public intolerance for misinformation, and the danger of it, is so much stronger now than before,” says Mr. Cook. “The pandemic is a terrible situation, but there’s potentially the opportunity to build a new public resilience” on the foundations of new public awareness.
Society is building toward an inflection point after years of watching institutions upended by “fake news,” says AllSides.com’s Mr. Gable.
“There are thousands of organizations now trying to bridge the divide and have people talk to each other, both for our own health and to make governments more functional,” he adds. “We are scaling up our efforts.”