Privacy advocates reject Europe's 'code of conduct' for online speech
While European officials say the code will spur Microsoft, Twitter, Facebook, and Google to strip hate speech from their platforms, civil liberty and privacy groups worry that overreaching enforcement will violate users' rights.
In an effort to blunt the spread of racist and extremist content on the web, European Union states along with Google, Twitter, Facebook, and Microsoft have agreed on a so-called "code of conduct" to review – and then delete at their discretion – suspected hate speech.
But some civil liberty and Internet advocacy groups worry that anointing tech companies as guardians against offensive speech raises privacy concerns for users and risks overzealous enforcement of the code.
"The code requires private companies to be the educator of online speech, which shouldn’t be their role, and it’s also not necessarily the role that they want to play," says Estelle Massé, the EU policy analyst with Access Now, a nonprofit digital advocacy organization based in Brussels.
The code is meant to encourage companies to become more vigilant about removing content that violates their own terms of service but that doesn't necessarily violate European law. The problem for civil liberty groups such as Access Now, says Ms. Massé, is that companies may monitor for and remove content merely because it's controversial and they feel that leaving it online exposes them to liability.
In a statement about the code, Access Now said: "Countering hate speech online is an important issue that requires open and transparent discussions to ensure compliance with human rights obligations."
The code will be reviewed by EU justice ministers next week but is otherwise final. It grew out of pressure on Internet companies operating in Europe to do more to remove extremist content and propaganda from their networks in the aftermath of the Islamic State attacks in Paris.
A forum to counter online hate speech that took shape at the end of 2015 initially included groups such as Access Now and European Digital Rights (EDRi), a civil society association based in Brussels. Those groups recently pulled out of talks about the code over differences with the other members of the forum, saying that the European Commission and US tech companies dictated the terms of the new code.
"This nonbinding code lets companies to remove content, especially on the basis of terms of service and not on the law,” says Maryant Fernández, advocacy manager at EDRi. "The outcome of these processes is that you are actually privatizing the enforcement of human rights."
Still, European officials say the code is a critical tool for combating the spread of extremist content that encourages violence.
"Social media is unfortunately one of the tools that terrorist groups use to radicalize young people and racists use to spread violence and hatred," said Vĕra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, in a statement on Tuesday. "This agreement is an important step forward to ensure that the Internet remains a place of free and democratic expression."
Europol, the European law enforcement agency, recently created Internet referral units that give companies information when something illegal is posted on their websites. However, it's up to individual companies whether they remove the content.
The code defines hate speech as "all conduct publicly inciting to violence or hatred directed against a group of persons or a member of such a group defined by reference to race, color, religion, descent or national or ethnic origin."
But there's no consensus across Europe when it comes to identifying that kind of speech, as EU members have varying interpretations of what constitutes extremist or criminal language. Denying the Holocaust, for example, is a crime in Germany but not elsewhere in the EU.
Instead of having tech companies police the web for offensive speech, says Burkhard Schröder, cofounder of the nonprofit German Privacy Fund, Internet users should be the ones who decide what's appropriate and what's not.
“If someone posts something anti-Semitic on Facebook, there will be 1 million people posting against it,” says Mr. Schröder. "It’s better to let the brains of the people than a company decide."