Facebook flounders as referee in gray area of online comments

The social media giant finds itself on a slippery slope in determining which types of comments and stories should be banned from its site. As a workaround, Facebook might label disputed stories as such and show content offering a different point of view.

Facebook CEO Mark Zuckerberg delivers the keynote address at F8, Facebook's developer conference, in San Jose, Calif., on May 1, 2018. (Marcio Jose Sanchez/AP)

Denying the Holocaust happened is probably OK on Facebook. Calling for a mob to kill Jews is not.

Mark Zuckerberg's awkward and eyebrow-raising attempt this week to explain where Facebook draws the line illustrates the complexities social media platforms face as they take on the unwanted role of referee in this age of online misinformation, manipulation and hate speech.

Facebook, with 2.2 billion users, disallows such things as nudity, the selling of guns, credible threats of violence, and direct attacks on people because of their race, sex or sexual orientation.

Hours after the Facebook founder's comments about Holocaust deniers aired on Wednesday, the company announced it would also start removing misinformation that could lead to bloodshed. The policy will begin in Sri Lanka and expand to Myanmar, where Facebook users have been accused of inciting anti-Muslim violence.

But beyond those guidelines, there are large gray areas. What, exactly, qualifies as supporting terrorist groups versus merely posting about them? Or mocking someone's premature death – something that is also prohibited?

If Facebook were to ban Holocaust denial, it might also be called on to prohibit the denial of other historical events, such as the Armenian genocide or the massacre of Native Americans by European colonizers. This, Facebook might argue, could lead to a slippery slope where the company finds itself trying to verify the historical accuracy of users' posts.

So, where it can, Facebook stays out of policing content.

While thousands of Facebook moderators around the world are assigned to review potentially objectionable content, aided by artificial intelligence, executives like to say the company doesn't want to become an "arbiter of truth" and instead tries to let users decide for themselves.

This is why fake news isn't actually banned from Facebook, though you might see less of it these days thanks to the company's algorithms and third-party fact-checking efforts. Instead, Facebook might label disputed news stories as such and show you related content that might change your mind.

YouTube recently started doing this, too. Twitter has been even more freewheeling in what sorts of content it allows, only recently ramping up a crackdown on hate and abuse.

"Facebook doesn't want to put time and resources into policing content. It's costly and difficult," said Steve Jones a professor of communications at the University of Illinois at Chicago. "It's a difficult job, I'm sure an emotionally draining job, and given the scale of Facebook, it would take a lot of people to monitor what goes through that platform."

At the same time, Jones said he has his doubts that throwing more moderators (Facebook's goal is to increase the number from 10,000 to 20,000 this year) and more technology at the problem would make a difference. He said he has no idea how Facebook can fix things.

"If I knew," he said, "I'd probably be sitting next to Mr. Zuckerberg asking for a big fat check."

Why these companies try to stay out of regulating speech goes back to their roots. They were all founded by engineers as tech companies that shun labels such as "media" and "editor." Facebook's chief operating officer, Sheryl Sandberg, even said in an interview last year that, as a tech company, Facebook hires engineers – not reporters and journalists.

Then there's the legal shield. While a newspaper can be held responsible for something printed on its pages, internet companies by law are not responsible for the content others post on their sites. If they start policing content too much – editing, if you will – tech companies risk becoming media companies.

Zeynep Tufekci, a prominent techno-sociologist, said on Twitter that the notion that you can "fight bad speech with good speech" doesn't really work in a Facebook world, if it ever did.

"Facebook is in over its head," she tweeted Thursday, but she also confessed that "nobody has a full answer."

In an interview with Recode, Mr. Zuckerberg, who is Jewish, said posts denying that the Nazi annihilation of 6 million Jews took place would not necessarily be removed. Zuckerberg said that as long as posts are not calling for harm or violence, even offensive content should be protected.

While this has been a longstanding position at the company, Zuckerberg's statement and his reasoning – that he doesn't think Holocaust deniers are "intentionally" getting it wrong – caused an uproar.

The Anti-Defamation League said Facebook has a "moral and ethical obligation" not to allow people to disseminate Holocaust denial.

Zuckerberg later tried to explain his words, saying in an email to Recode's Kara Swisher that he personally finds "Holocaust denial deeply offensive, and I absolutely didn't intend to defend the intent of people who deny that."

Still, for now, the policy is not changing.

This story was reported by The Associated Press. 
