Free speech vs. hate speech: How Reddit navigates the crosscurrents
Reddit has announced new measures to crack down on abusive users. Will that reduce hate speech, or hinder expression on one of the internet's freest platforms?
After stumbling into a controversy involving harassing comments last week, Reddit has announced that it will punish users who engage in hateful, abusive speech on the platform, marking a move away from the largely uninhibited, and controversial, space for speech the site has cultivated.
Online communities and social media sites have struggled to define their responsibility in moderating speech and content on their platforms while also trying to create spaces for controversial opinions that have little chance of surviving in mainstream media settings. While sites like Facebook have been criticized for heavy-handed censorship and the alleged promotion of certain viewpoints over others, other platforms such as Twitter have suffered the consequences of not curtailing online harassment on their sites.
Reddit, which began as a news aggregation site seeking to become the internet’s “front page,” has set a precedent for being a freer platform than most for anonymous communication, allowing its users to air controversial opinions and connect with one another. But just over a decade after the site’s launch, an unpredictable online sphere is forcing executives to grapple with the company’s identity.
“It’s a constant dilemma that we face on the internet,” Jessie Daniels, a sociology professor at Hunter College in New York who specializes in racism on the internet, tells The Christian Science Monitor. “Part of what the digital era has done is it’s raised new questions about the line between free speech and hate speech.”
And that line is an important one to define, she says, because speech dismissed as offensive but meaningless online utterances can spill over into the real world, leading to hate crimes and violence.
“What you find at places like Reddit is not what we would say about controversial ideas,” Dr. Daniels adds. “It’s about attacking people and attacking people in a really vicious way that often bleeds over into attacks in the material world and harming people.”
Last week, Reddit's chief executive officer, Steve Huffman, came under fire for editing users' comments that criticized him with abusive language, replacing his username with the usernames of prominent members of a pro-Donald Trump subreddit. He said the idea started as a joke, intended to show users how it feels when abusive posts are made involving their usernames. He later apologized, admitting the joke was done in bad taste and jeopardized the integrity and authenticity of speech on the site.
“I understand what I did has greater implications than my relationship with one community, and it is fair to raise the question of whether this erodes trust in Reddit,” he wrote in an announcement Wednesday. “I hope our transparency around this event is an indication that we take matters of trust seriously.”
Moving forward, Reddit plans to officially crack down on the site’s “most toxic users,” and has already identified swaths of such accounts and taken initial action against them, ranging from warnings to timeouts to permanent bans, Mr. Huffman said.
Daniels says those kinds of approaches are among the most effective ways to rid platforms of abusive users, but notes that they are labor-intensive and expensive. Companies have tried to develop automated filters that censor obscene language, but human users have often outsmarted the algorithms, forcing sites that want to foster civil discourse to aggressively monitor posts.
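To see why such automated filters are easy to outsmart, consider a minimal sketch of a blocklist-style keyword filter. This is only an illustration, not Reddit's actual system; the blocklist terms, function name, and evasion examples are hypothetical.

```python
import re

# Hypothetical blocklist of banned terms (placeholders, not a real moderation list).
BLOCKLIST = {"badword", "slur"}

def is_blocked(comment: str) -> bool:
    """Return True if any blocklisted term appears as a whole lowercase word."""
    words = re.findall(r"[a-z]+", comment.lower())
    return any(word in BLOCKLIST for word in words)

# A straightforward match is caught...
print(is_blocked("what a badword thing to say"))   # True

# ...but trivial character substitutions or punctuation slip through,
# which is why keyword filters alone tend to fail and human review persists.
print(is_blocked("what a b4dword thing to say"))   # False
print(is_blocked("what a bad-word thing to say"))  # False
```

As the last two calls show, a single swapped character or inserted hyphen defeats exact keyword matching, leaving sites to fall back on the kind of labor-intensive human moderation Daniels describes.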
In the same announcement, Huffman said that posts from the subreddit r/The_Donald would no longer appear on the popular r/all listing, where users from many different backgrounds engage.
“The sticky feature was designed for moderators to make announcements or highlight specific posts,” he wrote. “It was not meant to circumvent organic voting, which r/the_donald does to slingshot posts into r/all, often in a manner that is antagonistic to the rest of the community.”
Huffman’s “joke,” and his latest announcement, have received mixed reviews from both r/The_Donald members and users on the other end of the political spectrum. While some found the editing amusing, many said it hindered their trust in posts’ authenticity.
“Again, I am sorry for the trouble I have caused,” Huffman wrote. “While I intended no harm, that was not the result, and I hope these changes improve your experience on Reddit.”
But many more have applauded his latest efforts to ban the harassment and spamming that goes on as a result of the uninhibited atmosphere on the site, and experts say such moves can help to curb actual hate crimes committed in public.
In the wake of President-elect Trump’s unexpected election victory, more than 800 hate crimes were committed in nearly every state in just 10 days, according to a tally compiled by the Southern Poverty Law Center. Many targeted minority groups or bore Trump’s name in graffiti, and some observers suggested that Trump’s harsh rhetoric may have spurred the acts.
“The conclusion that we can draw from that is that words have consequences,” Daniels says. “And I think that’s something that we really have to pay attention to.”
While some may think online threats leveled by an anonymous user pose no danger to the person on the receiving end, the expression and normalization of such hatred can still lead to random acts of violence in the name of hate, Daniels says.
“The fact is that we know [hate speech online] creates an environment where some may read that and not get that it’s trolling and see it as a license to go out and assault someone,” she adds, noting that the issue becomes one of weighing free speech over protecting people from physical harm.
“That’s part of the human calculation as people who care about civil society and care about democracy: What is the balance between free speech, and speech that actually harms people?”