Microsoft crafts new policy in effort to ban 'terrorist content'
Microsoft announced on Friday that it will ban content 'used to promote terrorist violence or recruit for terrorist groups' on most – but not all – of its platforms.
In the wake of homegrown terrorist attacks in Paris, San Bernardino, and Brussels, Microsoft outlined new policies Friday to stymie “terrorist content” on its consumer services.
The company said in a blog post that it will ban content "used to promote terrorist violence or recruit for terrorist groups" from consumer services such as Xbox Live and Outlook email. It added, however, that the ban will not extend to its Bing search engine. From Bing, it will remove only content that violates the law.
Microsoft’s announcement underscores the tension social media and technology companies face between helping authorities stop terrorism and avoiding the censorship of speech.
“We have a responsibility to run our various Internet services so that they are a tool to empower people, not to contribute, however indirectly, to terrible acts,” writes Microsoft. “We also have a responsibility to run our services in a way that respects timeless values such as privacy, freedom of expression and the right to access information.”
Microsoft already prohibits hate speech and advocacy of violence on its consumer services. Its new guidelines add “terrorist content” to those restrictions: material that “depicts graphic violence, encourages violent action, endorses a terrorist organization or its acts, or encourages people to join such groups,” posted by or in support of groups on the Consolidated United Nations Security Council Sanctions List. The UN’s sanctions list includes the Islamic State group, as well as Al Qaeda and its Somali affiliate Al Shabab.
Once Microsoft receives a report of terrorist content, it will remove it. But it will not apply these guidelines to Bing.
“In the context of a tool for accessing information, we believe that societies, acting through their governments, ought to draw the line between free speech and limitations relating to particular types of content,” writes Microsoft. “We will remove links to terrorist-related content from Bing only when that takedown is required of search providers under local law.”
Over the past few years, officials have grown increasingly concerned about individuals becoming radicalized online.
“Aspiring fanatics can receive updates from hardcore extremists on the ground in Syria via Twitter, watch [IS] bloodlust on YouTube, view jihadi selfies on Instagram, read religious justifications for murder on JustPasteIt, and find travel guides to the battlefield on Ask.fm,” Homeland Security Committee Chairman Rep. Michael McCaul (R) of Texas said during a congressional hearing last year.
“Jihadi recruiters are mastering the ability to monitor, and prey upon, Western youth susceptible to the twisted message of Islamist terror,” the congressman said. “They seek out curious users who have questions about Islam or want to know what life is like in the so-called Islamic State.”
In one sense, Microsoft joins Facebook and Twitter in restricting content that could lead to radicalization. Twitter, for instance, announced in February that it had suspended about 125,000 accounts involved in terrorist activity over the previous year.
In its refusal to censor Bing, though, Microsoft joins the likes of Apple in its reluctance to overstep individual rights and freedom of expression in the name of public safety. That conflict came to a head after the San Bernardino terrorist attack, when Apple refused to help unlock an iPhone belonging to one of the attackers.
Following the attack, a bill was introduced in the Senate that would require social media companies to notify authorities of terrorist activity.
Emma Llansó, director of the Free Expression Project at the Center for Democracy and Technology, voiced her concerns about the bill to the Monitor’s Passcode.
The overly broad proposal, Ms. Llansó said, could lead to "massive over-reporting by companies who are trying to make sure they are not going to incur any legal liability for failing to report – meaning thousands of Americans will be reported to the government under the banner of association with terrorist activity, including potentially private communications that the government wouldn’t ordinarily just be able to demand the companies hand over."
Llansó said the bill could also have the opposite effect, and inadvertently discourage companies from taking steps to report potential terrorist activity. "If they don’t review content that gets reported to them, or take a look at something that’s been flagged, they can argue they don’t have actual knowledge and didn’t have obligation to make any sort of report."
This report contains material from Reuters.