Influencers: Tech firms should do more to block extremist content

A slim majority of Passcode Influencers said that US tech companies should ramp up efforts to remove extremist content from their platforms.

Illustration by Jake Turcotte

January 27, 2016


Social media companies came into the spotlight in the wake of the Paris and San Bernardino, Calif., attacks. After reports swirled that the California attackers posted support for Islamic State (IS) on social media, presidential candidates and policymakers urged Silicon Valley to help thwart the terrorist group’s use of US social media and online platforms to spread its message.

Though subsequent reports found the California shooters did not post publicly, the question of social media monitoring still became a hot topic in Washington, and senior Obama administration officials recently met with top tech company executives to discuss ways to collaborate to combat extremism online.


Fifty-three percent of Passcode’s pool of more than 130 digital security and privacy experts hailing from across government and the private sector said tech companies can do more in this realm.

“Much of the most effective extremist propaganda already violates the terms of service of various social media platforms,” said Rep. Jim Langevin (D) of Rhode Island, who co-chairs the Congressional Cybersecurity Caucus. “US tech companies should ensure that they are monitoring posts for content that is presumptively removable and should maintain an open line of communication with law enforcement agencies and other organizations flagging these posts.”

Many companies insist they are already working hard to remove such content when it violates their terms of service. Nicole Wong, former US deputy chief technology officer, noted that “US tech companies (generally) already do a lot, but... there’s always room for improvement and it should be an ongoing process.”

“With that said,” added Ms. Wong, who has also held senior posts at both Google and Twitter, “there should be meaningful transparency in the relationship between government and these technology companies. People need to trust these platforms, the companies that run them and the governments themselves, and transparency is an essential part of that. Second, there have been some who call for shutting down these global communications platforms or more broadly censoring them. That’s the wrong direction. It’s not only unrealistic, it moves us backward from the open, global Internet that has powered both economic development and global cooperation over the last two decades.”

The responsibility should not lie exclusively with the private sector, said one Influencer who chose to remain anonymous. “US tech companies understand the role that they play in working with governments and other stakeholders to stymie ‘extremist content,’ ” the Influencer said. “However, the burden cannot be placed on American firms alone — a genuine solution must be global in nature and be rooted in workable policy and laws.” To preserve the candor of their responses, Influencers have the option to comment on the record or anonymously.


Even some Influencers who supported companies stepping up their efforts on this front said it can be tricky to decide what exactly companies should do.

“The definition of ‘do more’ is the key,” said Nick Selby, cofounder of data analysis company StreetCred Software.

“I don’t think they should necessarily block it, but content and social media platforms have a fundamental responsibility to understand what they host and promote,” Mr. Selby continued. “When extreme opinion becomes potentially violent or exploitative, they have a societal responsibility to report it, with context and examples, for further analysis by authorities. This generally works with child exploitation content – but even in that field, many tech companies do the bare minimum of what is legally required.”

 

Reaching an objective consensus on what counts as extremist content could also prove difficult, some Influencers said.

“I said yes, but that was because I had to pick,” said Jon Callas, chief technology officer of the encrypted communications firm Silent Circle. “This [poll comes after] Martin Luther King Day. Fifty years ago, MLK was controversial to the point that the FBI targeted him with threats and calls for him to commit suicide. Would he be [considered] extremist? Today we have presidential candidates who call for things that are outright against the Constitution, like religious tests on public office. Is that extremist? And yet there are things that I think can be agreed to be extremist, like beheading journalists on camera. If one assumes we can get a reasonable definition of ‘extremist,’ then yes.”

A 47 percent minority said more intensive content monitoring by companies is not the answer. Some of them chose that position because they said decisions about what kind of content counts as “extremist” shouldn’t be left up to companies.

“Too often we see such policies implemented by low-level workers who are ill-equipped to understand the context, and often even the language, of the discussions they are supposed to be policing,” said Cindy Cohn, executive director of the Electronic Frontier Foundation. “Political and religious speech is regularly censored, as is nudity. In Vietnam, Facebook’s reporting mechanisms have been used to silence dissidents. In Egypt, the company’s ‘real name’ policy, ostensibly aimed at protecting users from harassment, once took down the very page that helped spark the 2011 uprising.

“And in the United States, the policy has led to the suspension of the accounts of LGBTQ activists,” Ms. Cohn continued. “Examples like these abound, making us skeptical that a heavier-handed approach by companies would improve the current state of abuse reporting mechanisms. So while urging companies to ‘do more’ sounds good – and conveniently shifts attention away from the failures of the intelligence agencies – it’s an approach that won’t work and can easily backfire.”

More aggressive blocking might also prove counterproductive to the goal of countering terrorists online, some Influencers argued. “Given the choice of having extremists posting content on US-run sites (which can be monitored by US intelligence agencies) and foreign-run sites (which are unlikely to turn over data to the US government), the choice should be obvious,” said one Influencer who chose to remain anonymous.

Another Influencer, who preferred to remain anonymous, noted that tech companies are already monitoring extremist content “and it’s proven to provide little to no law enforcement benefit.”

“An alternative,” the Influencer continued, “is to simply require some filter to say that ‘some people find this content inappropriate’ and hide it behind authentication and opt-in, so that it doesn’t bother normal users. YouTube uses a similar tactic for adult content, as an example. Not only is true censorship ineffective, but it contradicts the fundamental concepts of freedom of speech. If you block them they will go elsewhere, but it will be harder to track extremists as a result.”
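For illustration, the opt-in gating this Influencer describes could work roughly like the sketch below: flagged posts stay up but are hidden behind a sign-in and an explicit opt-in rather than being deleted. This is a hypothetical minimal example, not any platform’s actual implementation; all of the names here (Post, User, render_feed, the flagged_sensitive field) are invented for the sketch.

```python
from dataclasses import dataclass

# Hypothetical models; a real platform would source the "sensitive" flag
# from human moderators, user reports, or an upstream classifier.
@dataclass
class Post:
    body: str
    flagged_sensitive: bool

@dataclass
class User:
    authenticated: bool
    opted_in_to_sensitive: bool  # the user chose to see flagged content

INTERSTITIAL = "[Some people find this content inappropriate. Sign in and opt in to view it.]"

def render_feed(posts: list[Post], user: User) -> list[str]:
    """Hide flagged posts behind authentication plus an explicit opt-in,
    instead of removing them outright."""
    rendered = []
    for post in posts:
        if post.flagged_sensitive and not (user.authenticated and user.opted_in_to_sensitive):
            rendered.append(INTERSTITIAL)  # placeholder shown to non-opted-in users
        else:
            rendered.append(post.body)
    return rendered

if __name__ == "__main__":
    feed = [Post("ordinary update", False), Post("flagged material", True)]
    casual = User(authenticated=False, opted_in_to_sensitive=False)
    opted_in = User(authenticated=True, opted_in_to_sensitive=True)
    print(render_feed(feed, casual))    # flagged post replaced by the interstitial
    print(render_feed(feed, opted_in))  # full feed, including flagged material
```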

What do you think? VOTE in the readers’ version of the Passcode Influencers Poll

For a full list of Passcode Influencers: Check out our interactive masthead

 

Comments

YES

“Free speech has never been without provisional constraints for the good of the public, which I think is both proper and should be the governing rule for containing extremist posts on social media.” - Jeffrey Carr, Taia Global

“The tech companies, our government, and our allies should come up with a comprehensive proposal to shut down terrorist access and recruiting on the Internet.” - Influencer

“First Amendment doesn’t apply to private companies, although free speech principles do.” - Influencer

“They do more than is currently recognized, but this has to be a top priority.” - Influencer

“While blocking may not be the most imaginative way to combat terrorism online, and it runs the risk of political correctness, it is better than nothing.” - Influencer

“This is first and foremost a business model issue rather than a legal, regulatory, or moral one. Platforms will be held responsible for what is on their platforms. The days of ducking this reality by claiming ‘neutral platform’ or ‘user generated content’ are coming to an end and will probably end abruptly.” - Influencer

“Right now they are less legally obligated to do so than in the case of other potential crimes (e.g., child pornography) and there is little to justify such a difference. All that being said, it won’t turn the tide as the opportunities to communicate are so broad that regulation of the ‘responsible companies’ will in the long run be a drop in the proverbial Internet bucket.” - Influencer

 

NO

“Although social media companies are within their rights to censor undesirable content on their platforms, is that really the best strategy? The dialogue on social media draws extremists out into the open and provides an opportunity to challenge and discredit their ideas. A commitment to the principle of freedom of expression means having confidence that in the end, the right ideas will prevail.” - Tom Cross, Drawbridge Networks

“No. The reason for the ‘no’ is that the question is limited to ‘extremist content.’ Any algorithm for extremism detection would be [crap] from the get-go. The policy choice is this: either these ‘US tech companies’ are common carriers or they are liable for their content. As that is the real and only policy choice, make those ‘US tech companies’ choose which they want to be: You can charge whatever you like based on the contents of what you are hosting or carrying, but you are responsible for that content; inspecting brings with it a responsibility for what you learn. Or, you can enjoy common carrier protections at all times, but you can neither inspect nor act on the contents of what you are hosting or carrying and can only charge for hosting or carriage itself. Bits are bits. This would incidentally solve many other problems at the same time. Extremism defense is simply a trendy talking point.” - Dan Geer, In-Q-Tel

“Rather than serving as hall monitors, tech companies should focus on what they’re good at: empowering people with access to online services, especially those affected by extremism. Giving moderates a louder voice will help drown out the extremists.” - Chris Finan, Manifold Security

“It will be impossible to determine what is extremist (and actionable) information and firms won’t allow themselves to be placed in that position.” - Influencer

“The role of most tech companies on the Internet is as intermediaries. Intermediaries - most of them small businesses - can’t be responsible for tracking, monitoring, and determining the legality of content.” - Christian Dawson, Internet Infrastructure Coalition

“US technology companies are already working to address issues tied to extremist content in order to serve the best interests of communication and connection while striving to hold human rights, human dignity and human life in the highest regard. Tech companies are trending toward making a better effort. They can and will do more. Instead of Chief Privacy Officers they should be hiring and giving significant power to Chief Civil Liberties Officers. The CCLO would be involved in preserving free speech while balancing that with challenges that arise from bad actors online.” - Influencer

“We should use the same data and skills used to convince someone to make an online purchase to reject extremist ideologies. This requires moving beyond a conversation about blocking content online, and having a more nuanced debate about how to best respond to extremism and propaganda online. Importantly, the U.S. Department of State should issue grants to help develop automated tools to respond to hate speech. It is not enough to just turn this problem over to the private sector.” - Daniel Castro, Information Technology and Innovation Foundation

“Ike once said, ‘If you want total security, go to prison. There you’re fed, clothed, given medical care and so on. The only thing lacking...is freedom.’ We need to heed these words. Restriction of the right to freedom of speech as a mechanism for assuring better security is a step toward a future we would be trying to protect against.” - Influencer

“Extremist by whose standards? In some countries, laws that block extremist content are used to quell freedom of speech.” - Influencer

“Our members are responsible corporate citizens. They work hard to strike a balance between legitimate law enforcement concerns and other important values such as speech. Liability regimes that are over-inclusive can have a chilling effect on speech without necessarily furthering law enforcement goals.” - Abigail Slater, Internet Association

“It is not tech companies’ job to make law enforcement’s job easier for them. ‘Going Dark’ is just another term for ‘we suck at our job.’” - Influencer

“Content policing is difficult to do fairly without hindering free speech and violating privacy. Careful terms of service and content agreements could be developed on a voluntary basis, but it shouldn’t be forced or mandated.” - Katie Moussouris, HackerOne

“Reasonable people may disagree about the extent to which system operators should take extraordinary measures to edit, censor, or filter online content so as to frustrate the goals of anti-social actors of any sort. It is a given that there are costs, both operational and social, incurred by commercial firms (and their host countries) when asked or ordered to curate or police online dialogue. One is national competitiveness: companies may rationally choose to move their business operations to countries that don’t impose these obligations. Another is reputation. Multi-national companies in particular tend to avoid tying themselves to national policies that their entire customer set may not share or even find relevant. Lastly, chilling effects to speech are a real possibility, and many reasonable people believe that the answer to distasteful or even evil speech is not curtailment, but rather more speech, from a range of perspectives.” - Bob Stratton, MACH37

“There are already mechanisms in place to block content that is illegal in nature. Increasing capabilities and content filtering is a dangerous proposition for the Internet, which relies upon a more open nature.” - Influencer

“Any proposal to remove content will have a foreseeable, adverse impact on legitimate information and conversations, infringing the human rights of affected users. We need a robust discussion that includes, at a minimum, agreement to basic standards and safeguards to protect users and their rights to free expression and access to information, including robust transparency mechanisms. The first step that companies should take is to publish clear, accessible guidelines on their current practices, policies, and activities in regard to this content, differentiating between public and private content, such as public posts and direct messages between users. These guidelines should include detail on how algorithms deal with information to prioritize or de-prioritize certain types of content that users see. Companies must then engage meaningfully with the public in a broad conversation about the efficacy and broad impact of this behavior and the very real degradation of human rights that could occur if these policies are misused or abused.” - Amie Stepanovich, Access

“No, but that is the wrong question. There are really three questions we should consider: 1: Should the government force the companies to block extremist content? No - absolutely not. 2: Should tech companies create spaces where their customers are not forced to read or view extremist positions? That is a business decision that is totally dependent on the bottom line and the kinds of customers they are trying to cultivate. 3: Should the government allow tech companies to create those safe spaces and enforce the necessary rules to ensure it is safe? Absolutely. The problem with restricting extremist content is that the value of how extreme it is comes from the eye of the beholder. What is extremism to one person is patriotic to another. In the U.S., we are supposed to tolerate that kind of thing. Freedom of Speech is kind of a cornerstone to the entire country. You are allowed to think whatever the hell you want. You are allowed to say whatever the hell you want too (within certain limits that try to prevent physical harm). Burning the flag is not against the law. Saying racist things is not a crime. Condemning a religion is acceptable. They may be distasteful to you but the founding fathers built the nation on the idea the government should not control what you think; that we should be allowed to have a public discourse about ideas without the fear that you will be punished or suppressed. The U.S. has not always been good at following that philosophy, but as a rule, the country tends to come back to it over time. But if a tech company, or any company for that matter, wants to restrict extremist content, that is another thing altogether. The company executives are trying to build a successful business. If they need to cater to a specific kind of clientele in order to do that, then they should. I mean, if their content caters to the ‘Mary Poppins’ movie crowd, then it might behoove them to keep hate speech out of the venue or else risk driving away the very clients they were targeting. That said, I personally am not afraid of propaganda. It is just words put together to elicit an emotion. If you think we are losing our status in the world because we let crazies yell their hate speech in public spaces, you are thinking about it in the wrong way.” - Rick Howard, Palo Alto Networks
