In terror fight, tech companies caught between US and European ideals

Amid European pressure to crack down on terrorist content, US-based tech companies struggle to strike a balance between American and European concepts of censorship and freedom of speech.

Police spokesman Stefan Redlich presents confiscated phones, weapons, drugs, and a computer during a press conference in Berlin in April 2016. Berlin police say they raided ten residences in the German capital in a crackdown on hate speech postings on Facebook, Twitter, and other social media.

Bernd von Jutrczenka/dpa/AP

June 23, 2017

A spate of terrorist attacks in London, along with reports that the attackers may have been radicalized online, has prompted leaders of several European countries to propose legislation that would hold technology companies accountable for the distribution of terrorist content on their platforms.

Google and Facebook have both responded with internally developed strategies to combat the use of their websites to recruit and inspire would-be terrorists. At first glance, these efforts are an attempt to address European concerns, in hopes of preempting legislation that would likely include punitive measures for companies that fail to adequately censor terrorist content.

But on a deeper level, they are an attempt by some of the most prominent gatekeepers of the World Wide Web to balance conflicting US and European ideals around freedom of speech and censorship. And they highlight existing tensions between desires for an open, democratic flow of information and for a platform that doesn’t host hate speech, terrorist propaganda, or other harmful ideas.


“What we’re seeing and grappling with as a society is, what does it mean when potentially every individual has access to a global platform for their ideas and expression?” says Emma Llansó, the director of the Free Expression Project at the Center for Democracy and Technology (CDT), a Washington-based nonprofit that advocates for a free and open internet.

“That’s not something that our societies have had to deal with before, and that’s what’s underlying a lot of these tensions here. The internet famously removes gatekeepers from our ability to access information or express ourselves.”

Mounting pressure

Criticism that tech companies aren’t doing enough to monitor extremist content began mounting two years ago, experts say, particularly after the Charlie Hebdo terrorist attack in Paris in early 2015. In the past few months, those calls for an online crackdown have grown louder in the wake of more attacks by terrorists who appear to have been inspired, at least in part, by videos they found online.

After recent attacks in London, British Prime Minister Theresa May and French President Emmanuel Macron said they are developing plans to fine internet companies that fail to remove “extremist” propaganda from their platforms. Ms. May has been a frequent critic of tech companies and has promised to “regulate cyberspace.” In Germany, meanwhile, a draft law approved by the cabinet earlier this spring would fine companies up to €50 million if they fail to remove hate speech and fake news within 24 hours of it being reported (companies would have seven days to deal with less clear-cut cases).

With terrorist recruitment increasingly moving onto social media and online platforms, such calls are understandable, American critics say, but the sorts of laws being proposed in Europe risk tipping the balance toward dangerous censorship.


“It’s so easy to point to the need for internet companies to do more that that becomes a real rallying cry,” says Daphne Keller, the director of Intermediary Liability at Stanford Law School’s Center for Internet and Society and a former associate general counsel to Google. “In European lawmaking, they don’t have very good tech advice on what’s really possible. And the cost of a badly drafted law won’t fall on their constituents, so the temptation to engage in magical thinking is very great.” With a law like the one Germany has proposed, she and others say, the only way to comply would be to remove everything flagged, since no time is allowed for nuanced decisionmaking.

Silicon Valley response

With the threat of punitive laws looming, Google and Facebook have taken public steps to bolster their policies around extremist content.

Earlier this week, Google announced a four-pronged effort that includes increased use of technology to identify terrorist-related videos, additional human “flaggers” to make decisions about what to remove, warning labels and removal of advertising on potentially objectionable videos, and a revived effort to direct potential terrorist recruits toward counter-radicalization videos.

Facebook made similar announcements last week, including increased use of artificial intelligence to stop the spread of terrorist propaganda and plans to hire 3,000 more people over the next year to review the more nuanced cases.

“Our stance is simple: There’s no place on Facebook for terrorism,” the company said in a statement.

But the fixes are often far from simple, experts say, and companies walk a fine line as they try to provide a platform that encourages an open exchange of ideas while also making that platform as safe as possible. Add to that the current legislative environment in Europe, and some observers worry that the pressure will cause companies to remove too much content, putting Google, Facebook, and others in the role of censor.

In the United States, home to both Google and Facebook as well as several other major players in the social media space, issues of censorship are not taken lightly. Freedom of speech is enshrined in the First Amendment to the US Constitution, a principle that informs American views of how the internet should be used.

“We’re delegating decisions about the kinds of things that, if it was a Supreme Court decision, would be incredibly sensitive, and we’d be hanging on every word,” says Ms. Keller. “Instead, we have private companies doing it in back rooms.”

Who gets to decide?

Legally, private companies clearly have the right to enforce their own terms of service and decide what content to remove, but critics say the increased pressure from governments starts to blur who is really making those decisions.

As the European Union puts significant pressure on the major social-media platforms to remove hate speech and “extremist” speech, “We’re seeing companies try to placate the EU commission and national governments by adopting changes to their terms of service,” says Danielle Citron, a cyber-harassment expert at the University of Maryland School of Law who has worked with tech companies for a decade.

Since terms of service apply globally, this has the effect of making EU speech norms apply to everyone, even though European definitions of hate speech and extremist speech are often very broad – so broad, Professor Citron says, that they can “easily encompass political dissent” and turn into what she calls “censorship creep.”

“One politician’s terrorist speech is another person’s political dissent,” she says.

Under that sort of pressure, Citron says, “who becomes the censor is really the government.”

Ms. Llansó of CDT says that some moderation by social-media companies is clearly appropriate – but she emphasizes the need for both transparency and an avenue for appeals. She also cautions against making rash decisions driven by public fear. In the wake of repeated terrorist attacks, the need for immediate action can seem imperative, and social-media companies are an easy and attractive target.

But, says Llansó, there is an even stronger imperative to take the time to get this right, because of the implications it may have for the openness and rules of the internet well into the future.

“We’re not only talking about how do we deal with terrorist propaganda on social-media platforms,” Llansó says. “The policies and practices we develop now are what will shape our entire [online] infrastructure for the next few decades…. These are standards that will be applied to every challenging issue around free speech we encounter in the coming years.”