UK’s fight against far-right hate goes online, but does it go too far?

Hollie Adams/Reuters
Demonstrators clash with police officers during an anti-immigration protest, in Rotherham, England, Aug. 4, 2024. Rioters rallied and organized their efforts in large part online.

When a wave of Islamophobic, anti-migrant violence swept the United Kingdom at the start of August, far-right social media groups fanned the flames of rioters’ anger. Today, the same channels continue to operate – this time tracing the riots’ aftermath.

“The regime is cracking down on patriots,” a poster on one far-right channel bemoaned, citing the case of a woman who was sentenced to 15 months in jail for telling her local community’s Facebook group, “Don’t protect the mosques. Blow the mosque up with adults in it.”

The persistence of such groups is a continual source of worry for the U.K. government, which is searching for new ways to tackle online extremism. One potential tool is new legislation passed last year with the power to police social media, known as the Online Safety Act.

Why We Wrote This

To stop far-right rioting, the United Kingdom is looking to stamp out the sort of online activity that fostered the violence earlier this month. But the legislation that the government might use is under fire for being both too weak and too broad.

Though it doesn’t come into force until next year, leaders dealing with the real-life consequences of online hate and disinformation view the bill as a silver bullet to curb the threat of future violence. But the same legislation has also drawn strong criticism from all sides. Human rights groups have repeatedly warned that the act threatens user privacy and harms free speech. Others, such as London Mayor Sadiq Khan, believe the law simply does not go far enough. The result is a government forced to walk a tightrope between regulating too little and too much.

“I certainly think at the moment there is not enough regulation,” says Isobel Ingham-Barrow, CEO of Community Policy Forum, an independent think tank specializing in the structural inequalities facing British Muslim communities. “But it has to be specific, and you have to be careful because it can work both ways: You have to balance freedom of speech.”

Keeping users safe in the U.K.

The white paper that would eventually become the Online Safety Bill was drafted by legislators in 2019. It initially examined ways that the government and companies could regulate content that wasn’t illegal but could pose a risk to users’ well-being – especially children’s.

The effort was timely: When the pandemic hit less than a year later, politicians watched firsthand how rapidly disinformation could spread – and how social media could supercharge its reach.

Jaap Arriens/Sipa USA/AP
The Telegram messaging app is seen on an iPhone. Applications like Telegram run afoul of the U.K.’s Online Safety Act because their end-to-end encryption makes the act’s requirement that platforms scan user messages for illegal content impossible to fulfill.

But as months passed, the bill’s remit grew in a bid to fight an ever-sprawling list of potential digital harms. In its final form, the act contains more than 200 clauses. It requires social media platforms to remove posts containing “illegal material” under U.K. law, such as threats or hate speech. When the act comes into force, companies that don’t comply will face fines of up to £18 million or 10% of their global turnover – whichever is higher.

While some of its reforms have been widely welcomed – for example, the legislation outlaws the spread of deepfake and revenge pornography – others are highly divisive.

In one case, websites will be compelled to verify the age of their users in a bid to avoid showing inappropriate content to minors – a requirement that the likes of the Wikimedia Foundation have already said they will be unable to fulfill without violating their own rules on collecting user data.

Another much-discussed clause demands that platforms scan users’ messages for content such as child sexual abuse. As well as being seen as an attack on user privacy, such a requirement, many experts say, is all but impossible to fulfill for end-to-end encrypted services such as WhatsApp.

Yet concern also remains that the law does not go far enough, particularly if it is to tackle extremist rhetoric. Originally, the act required platforms to remove content seen as “legal but harmful,” such as disinformation that posed a threat to public health or encouraged eating disorders, but this was eventually scrapped.

Some now believe it’s time to reexamine whether requirements could be used to tackle rumors such as those that started August’s far-right rioting. Unrest began to spiral when posts on X falsely claimed that a teenager who killed three children in Southport, England, was a Muslim migrant. The killer was later confirmed to be British-born, with no connection to Islam.

“I think what the government should do very quickly is check whether the Online Safety Act is fit for purpose,” Mayor Khan said in an interview with The Guardian.

Not enough regulation vs. too much

Yet for human rights groups and campaigners who have rallied against the Online Safety Act, such calls to use the law as a one-size-fits-all solution are a cause for concern.

Campaign groups fear that the law’s already all-encompassing remit will cause social media platforms to overmoderate. British media regulator Ofcom, which will be responsible for implementing the law, has not yet released its guidance on how it will judge “illegal content.” Neither does the U.K. have a written constitution outlining free speech protections. The current atmosphere is one of uncertainty.

“If you don’t define what ‘illegal content’ is tightly, companies will err on the side of caution,” says James Baker, campaigns and advocacy manager at Open Rights Group, which promotes privacy and speech online. If a platform wrongly leaves something up, it is penalized, but “there’s no punishment in the act for wrongly curtailing free speech.”

Even trying to gauge content’s legality under existing law exposes inconsistencies.

“When looking at cases of racial hatred, U.K. law protects against abusive, threatening, or insulting words or behavior. [But] in cases of religious hatred, victims are only protected from threatening words or behavior. There is a disparity in the thresholds for different types of hate,” says Ms. Ingham-Barrow. “The lack of clarity surrounding the definition of harms – far from making the U.K. ‘the safest place in the world to go online’ for Muslims, this bill will do little to protect Muslim communities from Islamophobic abuse online.”

Ultimately, experts stress that they will not be able to assess the full impact of the act until it comes into force next year. “It’s foolhardy to call for the law to be changed before we’ve seen it practiced,” says Mr. Baker.

But legislation designed to crack down on disinformation or hate speech is just one piece of a far larger puzzle.

“So much of what makes people susceptible to disinformation is about both the online and the offline world,” says Heidi Tworek, associate professor of history and public policy at the University of British Columbia in Vancouver. “It can depend on age, gender, race, your political leaning, the things that a platform’s algorithm shows you and the community you find. We need to go beyond just regulation, and recognize that disinformation has many online and offline causes.”
