How Google plans to fight extremism through search advertising

A new program will provide advertising grants to anti-radicalization organizations, making their content more visible when people search Google for violent or extremist material.

A man walks by a building on Google's campus in Mountain View, Calif., on November 12, 2015. Jeff Chiu/AP/File

February 3, 2016

When would-be terrorists go to Google to find violent propaganda or information about bomb-making, they may instead find a counter-narrative of moderation and non-aggression.

A new Google program outlined at a committee hearing in the British Parliament aims to counteract extremist material online by increasing the visibility of anti-radicalization organizations in search results.

“When people put potentially damaging search terms into our search engine they [will] also find these counter narratives,” Anthony House, Google’s Head of Policy Strategy for the EU, told members of Parliament.


The pilot program will give up to $10,000 per month in advertising credits to non-governmental organizations that work to counter radicalization. It will run as part of the existing Google AdWords Grants program, which provides advertising credits to charities to help them promote their missions and recruit volunteers. The credits will let anti-radicalization organizations buy ad placements at the top of Google search results pages, making their message the first one people see when they search for terms related to terrorism or violent extremism.

The pilot program will not alter Google’s search results or page ranking algorithm to nudge people away from violent content or propaganda; instead, it will give anti-radicalization organizations a way to promote their content in search results in order to counteract extremist material.

“It’s extremely important that people are able to find good information, that when people are feeling isolated, that when they go online, they find a community of hope, not a community of harm,” Mr. House told members of Parliament.

House also said that Google will try to make anti-radicalization videos easier to discover on YouTube. Last year, the company removed more than 14 million videos from YouTube for reasons ranging from copyright infringement to terrorist content, and YouTube users flagged more than 100,000 videos as inappropriate.

Representatives from Twitter and Facebook also attended the hearing. Twitter’s UK Public Policy Manager Nick Pickles told members of Parliament that the social network has a team of more than 100 employees dedicated to removing violent, extremist, or otherwise inappropriate content from the site, and that in 2014 Twitter removed tens of thousands of accounts that were spreading extremist content. Facebook executive Simon Milner said that the social network was a “hostile place” for extremist organizations, and that Facebook was dedicated to keeping users safe by removing content that promotes violence.