Should tech companies delete ISIS videos?
Tech companies are once again facing criticism for providing a forum for terrorist recruitment and training. But the decision to remove ISIS-sponsored content is more complicated than many assume.
The terrorist attack on London Bridge over the weekend has reignited a debate about tech companies’ level of responsibility in preventing terrorism. Hours after Saturday’s attack, British Prime Minister Theresa May called for a regulatory crackdown on online content and criticized the tech industry for giving extremist ideology “the safe space it needs to breed.”
London Mayor Sadiq Khan echoed that call in a statement Monday. “After every terrorist attack we rightly say that the internet providers and social media companies need to act and restrict access to these poisonous materials,” he said. “But it has not happened ... now it simply must happen.”
But analysts say pushing technology companies to remove extremist content may not be the straightforward solution it seems.
There are the expected censorship concerns, but it's not as simple as free speech versus security. Some say removing content might not be effective in disconnecting Islamic State (ISIS) recruiters from potential recruits, and may even make it harder for intelligence agencies to monitor terrorist plots online. Others suggest focusing on online content is a distraction, and that efforts should instead aim to keep those susceptible to extremist messages from seeking them out online in the first place.
These calls come amid reports that one of the three attackers responsible for Saturday’s terrorist attack may have been radicalized by extremist sermons on YouTube.
ISIS videos and other materials showing how to maximize damage with a vehicle-and-knife attack have also surfaced online in the past year – a script eerily similar to the one followed on London Bridge, where seven people were killed and 48 injured.
The line between stifling speech and thwarting terrorism
The open nature of the internet has long been criticized by regulatory advocates as offering terrorists a free forum to circulate extremist content. By one count, as many as 90 percent of terrorist attacks in the past four years have had an online component. But those opposed to a regulatory approach worry that cracking down on questionable content risks casting too wide a net and censoring legitimate content.
When it comes to extremist content, treading that line is tricky. Unlike child pornography, extremist views aren’t illegal to hold – and in the United States, neither is broadcasting them. As such, deciding which content to remove takes a value judgment.
An algorithm can’t pick up on the necessary nuances to find the line between over-censorship and dangerous extremist content, says Aram Sinnreich, professor of communications at American University in Washington. “There are no paths that preserve anything remotely approaching an open internet, and at the same time preventing ISIS from posting recruitment videos.”
Many large tech companies have tried to compromise by employing an army of human workers to review content flagged by users as problematic. The reviewers use the tech company’s terms of use as guidance, but in the case of extremist content, it’s not always black and white.
But Hany Farid, senior adviser to the nonprofit Counter Extremism Project, says it is possible for an algorithm to find the sweet spot, as long as humans work with it. A computer science professor at Dartmouth College, Dr. Farid helped develop the tool now used by most internet companies to identify and remove child pornography. He has also developed a more sophisticated tool that he says can be harnessed to weed out extremist content.
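The article does not spell out how Farid’s tools work. As a loose illustration of the general family of techniques they belong to (fingerprinting known material and matching new uploads against a database of those fingerprints, tolerating small edits), here is a toy Python sketch. The “average hash,” the match threshold, and the sample hash database are simplified stand-ins for illustration, not Farid’s actual method:

```python
# Toy illustration of hash-based content matching. Production systems use far more
# robust fingerprints; this only shows the general shape of the approach.

def average_hash(pixels):
    """Compute a 64-bit fingerprint from an 8x8 grid of brightness values (0-255)."""
    flat = [p for row in pixels for p in row]
    avg = sum(flat) / len(flat)
    bits = 0
    for value in flat:
        bits = (bits << 1) | (1 if value >= avg else 0)
    return bits

def hamming_distance(a, b):
    """Count the bits that differ between two fingerprints."""
    return bin(a ^ b).count("1")

# Hypothetical database of fingerprints for frames of known extremist videos.
KNOWN_BAD_HASHES = {0xF0F0_0F0F_AAAA_5555}

def matches_known_content(pixels, threshold=5):
    """Flag an upload whose fingerprint is within `threshold` bits of a known hash,
    so that re-encoded, cropped, or watermarked copies still match."""
    h = average_hash(pixels)
    return any(hamming_distance(h, bad) <= threshold for bad in KNOWN_BAD_HASHES)
```

The near-match tolerance is the point: exact file hashes miss trivially edited re-uploads, which is why robust (perceptual) fingerprints are used for this kind of screening.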
Farid says internet companies’ concerns about crossing the line into censorship are unfounded.
“I’m not buying the story” that it’s too difficult or there’s a slippery slope leading to more censorship, Farid says. “That’s a smokescreen, saying there’s a gray area. Of course there is. But it doesn’t mean we don’t do anything. You deal with the black and white cases, and deal with the gray cases when you have to.”
Tech companies have gone through “an evolution of thinking” recently and are now more proactively removing content on their own, says Seamus Hughes, deputy director of the Program on Extremism at George Washington University. He points to the 2013 Boston Marathon bombing as a turning point. Investigators found clues that the attackers may have learned how to make a bomb from Inspire magazine, an online, English-language publication reportedly produced by Al Qaeda.
“It became so there was less of a level of acceptance for general propaganda to be floating out there,” Mr. Hughes says.
In one initiative launched last year, the tech giants teamed up to make terrorism-related content easier to spot. Facebook, Microsoft, Twitter, and YouTube developed channels to share information about extremist content and accounts so that individual companies can find and take it down more quickly.
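The article describes those shared channels only at a high level. As a rough sketch of the general idea (a pooled index of content fingerprints that each company contributes to and queries), consider the following Python, in which every class, function, and platform name is invented for illustration:

```python
# Hypothetical sketch of a cross-company hash-sharing index, loosely modeled on
# the shared-database idea described above; not any consortium's actual API.
import hashlib

class SharedHashIndex:
    """A pooled set of content fingerprints contributed by participating platforms."""

    def __init__(self):
        self._hashes = {}  # fingerprint -> platform that contributed it

    def contribute(self, platform, content_bytes):
        """A platform adds the fingerprint of content it has already removed."""
        digest = hashlib.sha256(content_bytes).hexdigest()
        self._hashes.setdefault(digest, platform)
        return digest

    def check(self, content_bytes):
        """Another platform checks a new upload against the pooled fingerprints."""
        digest = hashlib.sha256(content_bytes).hexdigest()
        return self._hashes.get(digest)  # contributing platform, or None

# One company flags a video; another catches a byte-identical re-upload.
index = SharedHashIndex()
index.contribute("VideoSiteA", b"...bytes of a removed propaganda video...")
print(index.check(b"...bytes of a removed propaganda video..."))  # "VideoSiteA"
```

Exact SHA-256 hashes only catch byte-identical copies; in practice a robust fingerprint like the one sketched earlier would be what gets shared, so edited re-uploads still match.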
Whack-a-mole concerns
Still, some say that removing content might not actually be an effective approach to stem radicalization and recruitment by terrorist organizations.
One concern is that extremist content will simply move to other platforms.
“It’s sort of a whack-a-mole kind of problem,” says Eric Rosand, senior fellow in the Project on US Relations with the Islamic World at the Brookings Institution and director of The Prevention Project: Organizing Against Violent Extremism in Washington, D.C. “Terrorists will find another way to reach out with propaganda” if it’s removed.
That could mean moving to smaller platforms that offer more encryption and have less capacity to review and remove content.
This content could also move to the dark web, a corner of the internet that is heavily encrypted and challenging for intelligence officials to track. The dark web’s audience is limited, which could reduce recruitment for organizations like ISIS, Hughes says, but those who do make it into its depths tend to be particularly dedicated.
And then there’s the question of where intelligence agencies can best keep tabs on extremists, Hughes says. “Is it better for these guys to be on the systems where we know we can [collect information on] them, we know who everyone is, but they can reach more people? Or is it better to push them off to the margins so they’re only talking to who they already were going to talk to to begin with?”
Counter-messaging
Some tech companies and government officials have been weighing alternative options to counteract extremist content. One idea is to harness the tools of the internet and social media to reach people in danger of being radicalized – in other words, use the same tools as ISIS in a sort of counter-messaging effort.
Google’s 2015 pilot project, the “Redirect Method,” tried to target the audience most susceptible to online recruitment and radicalization and, when they searched for certain terms, direct them toward existing YouTube videos that counter terrorists’ messages. The project borrowed the principles businesses use to target ads at particular consumers.
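Public descriptions of the project suggest the core mechanic is routing risky search queries to curated counter-narrative videos. A minimal Python sketch of that routing follows; the search terms and playlist URLs are invented placeholders, not Google’s actual targeting lists:

```python
# Minimal sketch of the redirect idea: map search queries that suggest interest in
# extremist material to counter-narrative playlists, the way ad systems map
# queries to ads. All terms and URLs below are invented placeholders.

REDIRECT_PLAYLISTS = {
    "join the caliphate": "https://youtube.example/playlist?list=COUNTER_NARRATIVE_1",
    "life under isis": "https://youtube.example/playlist?list=COUNTER_NARRATIVE_2",
}

def counter_messaging_for(query):
    """Return a counter-narrative playlist if the query matches a targeted term."""
    normalized = query.lower().strip()
    for term, playlist in REDIRECT_PLAYLISTS.items():
        if term in normalized:
            return playlist
    return None  # ordinary query: no redirect

print(counter_messaging_for("What is life under ISIS really like?"))
```

In the actual pilot, the matching reportedly ran on the same keyword-targeting machinery used for advertising rather than a hand-written lookup table like this one.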
Similarly, officials in the State Department’s Global Engagement Center have used paid ads on Facebook as a means of reaching out to young Muslims who may be targeted by extremist recruiters. The ads are for videos and messaging that counteract what they hear from jihadists.
But online content might not be as responsible for radicalizing terrorists as some politicians are implying, says Dr. Rosand of Brookings. “It’s as much about the offline networks, it’s as much about the grievances that drove them to violence, or made them very susceptible to violent messages, as they become radicalized.”
He suggests that politicians instead encourage tech companies to invest in communities by providing other alternatives to the path of terrorism. “How do you give them options, other than going online, to search for meaning in their lives? We don’t invest enough in that.”