From fake news to fabricated video, can we preserve our shared reality?
From the instant replay that decides a game to the bodycam footage that clinches a conviction, people tend to trust video evidence as an arbiter of truth.
But that faith could soon become quaint, as machine learning is enabling ordinary users to create fabricated videos of just about anyone doing just about anything.
Earlier this month, the popular online forum Reddit shut down r/deepfakes, a subreddit discussion board devoted to using open-source machine-learning tools to insert famous faces into pornographic videos. Observers say this episode represents just one of the many ways that this technology could fuel social problems, particularly in an age of political polarization. Combating the negative effects of fabricated video will require a shift among both news outlets and news consumers, experts say.
“Misinformation has been prevalent in our politics historically,” says Brendan Nyhan, a political scientist at Dartmouth College in Hanover, N.H., who specializes in political misperceptions. “But it is true that technology can facilitate new forms of rumors and other kinds of misinformation and help them spread more rapidly than ever before.”
So-called fake news was around long before Macedonian teenagers began enriching themselves by feeding false stories to social media users. In 1782, Benjamin Franklin printed a falsified supplement to the Boston Independent Chronicle that maligned Seneca Indians in an attempt to influence public opinion during peace negotiations with Britain.
In 1835, the New York Sun became the world’s bestselling newspaper after it reported on the discovery of bat-like humanoid creatures living on the moon. Around the turn of the 20th century, some of the United States' largest news platforms – William Randolph Hearst’s International News Service, the United Press Associations, and the Associated Press – were distributing false stories that trade unionists and socialists condemned as “fake news.”
“The term ‘fake news’ as a kind of an epithet kind of rises and falls,” says Pennsylvania State University media studies professor Matthew Jordan, noting that it resurged around World War II. “It seems to really key in on conflict.”
But in video form, so-called fake news may represent something new.
Seeing is believing
“When you see something, or when you believe that you’re seeing something and hearing something, it has a much more visceral impact on you, by and large, than when it’s something that you’re just reading about,” says Henry Farrell, a professor of political science at George Washington University in Washington.
“So I suspect that as we begin to see people really begin to use these techniques, we’re going to see content being circulated which really punches you in the solar plexus in a way that wasn’t true of other forms of fake news,” he says.
In a Feb. 4 New York Times op-ed written with political historian Rick Perlstein, Professor Farrell warned that this technology’s “implications for democracy are eye-opening.”
“Democracy assumes that its citizens share the same reality,” the op-ed concluded. “We’re about to find out whether democracy can be preserved when this assumption no longer holds.”
The face-swapping software relies on open-source machine-learning frameworks such as TensorFlow, which Google distributes freely. Trained on publicly available images of a person’s face, a neural network can replace the face in an original video with a new one that mimics the original’s expressions. To work effectively, the network must be trained on many existing images of that person’s face.
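The architecture widely described for these tools pairs one shared encoder with a separate decoder for each face. The sketch below, written against TensorFlow’s Keras API, is a minimal illustration of that idea; the layer sizes, the 64x64 input crops, and all names are assumptions for demonstration, not the code of any particular tool.

```python
# A minimal, illustrative sketch of the shared-encoder / dual-decoder
# autoencoder design widely described for face-swapping tools.
# Layer sizes, the 64x64 input crop, and all names are assumptions.
from tensorflow.keras import Model, layers


def build_encoder():
    # Compress a 64x64 RGB face crop into a compact latent vector.
    inp = layers.Input(shape=(64, 64, 3))
    x = layers.Conv2D(64, 5, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(128, 5, strides=2, padding="same", activation="relu")(x)
    latent = layers.Dense(256)(layers.Flatten()(x))
    return Model(inp, latent, name="shared_encoder")


def build_decoder(name):
    # Reconstruct a 64x64 face image from the shared latent vector.
    inp = layers.Input(shape=(256,))
    x = layers.Dense(16 * 16 * 128, activation="relu")(inp)
    x = layers.Reshape((16, 16, 128))(x)
    x = layers.Conv2DTranspose(64, 5, strides=2, padding="same",
                               activation="relu")(x)
    out = layers.Conv2DTranspose(3, 5, strides=2, padding="same",
                                 activation="sigmoid")(x)
    return Model(inp, out, name=name)


encoder = build_encoder()
decoder_a = build_decoder("decoder_face_a")  # trained only on person A's face
decoder_b = build_decoder("decoder_face_b")  # trained only on person B's face

# Each autoencoder learns to reconstruct its own person's face crops.
frame = layers.Input(shape=(64, 64, 3))
autoencoder_a = Model(frame, decoder_a(encoder(frame)))
autoencoder_b = Model(frame, decoder_b(encoder(frame)))
autoencoder_a.compile(optimizer="adam", loss="mae")
autoencoder_b.compile(optimizer="adam", loss="mae")

# After training, the swap: encode a frame of person A, but decode it
# with person B's decoder, yielding B's face wearing A's expression.
# swapped_face = decoder_b(encoder(frame_of_person_a))
```

Because both decoders are forced to read from the same latent space, the encoder ends up capturing pose and expression rather than identity, which is what makes the swap possible.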
Another technology, called VoCo, demonstrated in 2016 by Adobe Systems, the maker of Photoshop, lets users alter an audio recording to include novel words and phrases in the original speaker’s voice. All it needs is 20 minutes of recorded speech.
To be sure, deep-learning generated faces still look a bit wooden. A video that replaces Hillary Clinton's face with that of President Trump comes across as pixelated and clearly altered. A video that appears to show Nicolas Cage portraying Lois Lane is smoother, but it still triggers the so-called uncanny valley effect, an eeriness produced by an almost-but-not-quite real face. But the technology is improving, and, when mixed with confirmation bias – the tendency to process information in a way that conforms to one’s preexisting beliefs – it could become an increasingly destructive social influence, one that corrodes even good-faith efforts to tell the truth.
“Not only does this mean that it’s going to be easier for people to pass [off] fake stuff as being real,” says Farrell. “It’s also going to be easier for people to pass [off] real things as being fake too.”
That uncertainty about real news content is Dartmouth Professor Nyhan’s larger concern. “People’s baseline level of trust in the news may further decrease, which is corrosive,” he says.
Taking a stand for truth?
Countering fabricated videos will require news media to play a more active role in verification.
“I think [journalists are] going to have to take a more and more direct stance in saying we think that this is true and we think that that’s not true,” says Farrell. “That doesn’t have to be a partisan stance, but it does have to be a stance in favor of the truth, because otherwise I think they’re going to continue to do quite enormous damage to democracy.”
But in the era of fake news, verification often isn’t enough. Even an easily debunked fake video can cast an innocent victim in a false light just by having its existence discussed in the news. Journalists will also have to practice better curation, says Professor Jordan of Pennsylvania State.
“I kind of have this feeling that what happens a lot in editorial rooms is they’re like ‘Oh, this is a story with buzz. We’ve got to cover it because everybody’s covering it,’ ” Jordan says. “But do I care that Kim Kardashian broke the internet again?”
Jordan cites the professionalization of journalism beginning at the turn of the 20th century, which established industry-wide norms and standards and shifted the job’s class composition toward white collar. “This is going to require another kind of wholesale readdressing of what newsrooms are doing.”
News consumers, too, will no doubt shift their expectations.
“One possibility is that we all become a lot more skeptical about the media that’s put in front of us,” says Tim Hwang, director of the Ethics and Governance of Artificial Intelligence Fund at the Berkman Klein Center and the MIT Media Lab in Cambridge, Mass.
Mr. Hwang notes how even visual effects in movies from a decade ago, ones that seemed utterly convincing at the time, now seem fake. “I have a lot of faith in our ability to learn the tricks. The uncanny valley may shift as technology shifts.”
Editor's note: An earlier version misstated the name of Dartmouth College.