UK’s fight against far-right hate goes online, but does it go too far?
Manchester, England
When a wave of Islamophobic, anti-migrant violence swept the United Kingdom at the start of August, far-right social media groups fanned the flames of rioters’ anger. Today, the same channels continue to operate – this time tracing the riots’ aftermath.
“The regime is cracking down on patriots,” a poster on one far-right channel bemoaned, citing the case of a woman who was sentenced to 15 months in jail for telling her local community’s Facebook group, “Don’t protect the mosques. Blow the mosque up with adults in it.”
The persistence of such groups is a continual source of worry for the U.K. government, which is searching for new ways to tackle online extremism. One potential tool is the Online Safety Act, legislation passed last year that gives the government the power to police social media.
Why We Wrote This
To stop far-right rioting, the United Kingdom is looking to stamp out the sort of online activity that fostered the violence earlier this month. But the legislation that the government might use is under fire for being both too weak and overbroad.
Though it doesn’t come into force until next year, the act is viewed by leaders dealing with the real-life consequences of online hate and disinformation as a silver bullet to curb the threat of future violence. Yet the same legislation has been strongly criticized from all sides. Human rights groups have repeatedly warned that the act threatens user privacy and harms free speech. Others, such as London Mayor Sadiq Khan, believe the law simply does not go far enough. The result is a government forced to walk a difficult tightrope between regulating too little and too much.
“I certainly think at the moment there is not enough regulation,” says Isobel Ingham-Barrow, CEO of Community Policy Forum, an independent think tank specializing in the structural inequalities facing British Muslim communities. “But it has to be specific, and you have to be careful because it can work both ways: You have to balance freedom of speech.”
Keeping users safe in the U.K.
The white paper that would eventually become the Online Safety Bill was published by the government in 2019. It initially examined ways that the government and companies could regulate content that wasn’t illegal but could pose a risk to users’ well-being – especially children’s.
The effort was timely: When the pandemic hit less than a year later, politicians watched firsthand as disinformation spread rapidly – and as social media supercharged its reach.
But as months passed, the bill’s remit grew in a bid to fight an ever-sprawling list of potential digital harms. In its final form, the act contains more than 200 clauses. It requires social media platforms to remove posts containing “illegal material” under U.K. law, such as threats or hate speech. When the act comes into force, companies that don’t comply will face fines of up to £18 million or 10% of their global turnover – whichever is higher.
While some of its reforms have been widely welcomed – the legislation will outlaw the sharing of deepfake and revenge pornography, for example – others are highly divisive.
Under one provision, websites will be compelled to verify the age of their users in a bid to avoid showing inappropriate content to minors – a requirement that the likes of the Wikimedia Foundation have already said they will be unable to fulfill without violating their own rules on collecting user data.
Another much-discussed clause demands that platforms scan users’ messages for content such as child sexual abuse material. Many experts say such scanning is not only an attack on user privacy but all but impossible for end-to-end encrypted services such as WhatsApp, which cannot read users’ messages without breaking their own encryption.
Yet concern also remains that the law does not go far enough, particularly if it is to tackle extremist rhetoric. Originally, the act required platforms to remove content seen as “legal but harmful,” such as disinformation that posed a threat to public health or encouraged eating disorders, but this was eventually scrapped.
Some now believe it’s time to reexamine whether such requirements could be revived to tackle rumors like those that sparked August’s far-right rioting. Unrest began to spiral when posts on X falsely claimed that a teenager who killed three children in Southport, England, was a Muslim migrant. The killer was later confirmed to be British-born, with no connection to Islam.
“I think what the government should do very quickly is check whether the Online Safety Act is fit for purpose,” Mayor Khan said in an interview with The Guardian.
Not enough regulation vs. too much
Yet for human rights groups and campaigners who have rallied against the Online Safety Act, such calls to use the law as a one-size-fits-all solution are a cause for concern.
Campaign groups fear that the law’s already all-encompassing remit will cause social media platforms to overmoderate. British media regulator Ofcom, which will be responsible for implementing the law, has not yet released its guidance on how it will judge “illegal content.” Neither does the U.K. have a written constitution outlining free speech protections. The current atmosphere is one of uncertainty.
“If you don’t define what ‘illegal content’ is tightly, companies will err on the side of caution,” says James Baker, campaigns and advocacy manager at Open Rights Group, which promotes privacy and speech online. Because if a platform wrongly leaves something up, it is penalized, but “there’s no punishment in the act for wrongly curtailing free speech.”
Even trying to gauge content’s legality under existing law reveals inconsistencies.
“When looking at cases of racial hatred, U.K. law protects against abusive, threatening, or insulting words or behavior. [But] in cases of religious hatred, victims are only protected from threatening words or behavior. There is a disparity in the thresholds for different types of hate,” says Ms. Ingham-Barrow. “The lack of clarity surrounding the definition of harms – far from making the U.K. ‘the safest place in the world to go online’ for Muslims, this bill will do little to protect Muslim communities from Islamophobic abuse online.”
Ultimately, experts stress that they will not be able to assess the full impact of the act until it comes into force next year. “It’s foolhardy to call for the law to be changed before we’ve seen it practiced,” says Mr. Baker.
But legislation designed to crack down on disinformation or hate speech is just one piece of a far larger puzzle.
“So much of what makes people susceptible to disinformation is about both the online and the offline world,” says Heidi Tworek, associate professor of history and public policy at the University of British Columbia in Vancouver. “It can depend on age, gender, race, your political leaning, the things that a platform’s algorithm shows you and the community you find. We need to go beyond just regulation, and recognize that disinformation has many online and offline causes.”