Facebook's secret experiment on users had a touch of 'Inception'
Facebook secretly tweaked some users' news feeds as part of an experiment on 'emotional contagion.' The results were fascinating, but they raise questions about online ethics.
In the popular 2010 sci-fi movie “Inception,” a crew of tech-savvy geniuses sneaks into a person’s subconscious dreamscape, “incepts” an action-inducing emotion, and then watches as that person makes a wide-awake “free choice” based on the power of that surreptitiously implanted suggestion.
Like fictional tales from “Frankenstein” to “The Matrix,” the film tapped the kind of cultural fears often evoked by the thrilling power of human technology and the creepy unintended consequences it might produce.
So when Facebook revealed last week that it had conducted a secret experiment on its users’ emotions, gauging the “emotional contagion” of its algorithmic, personally tailored news feed, a chorus of critics cried foul, saying the social media behemoth was messing with people’s minds without their knowledge.
For a week in 2012, a new study has revealed, a trio of scientists was allowed to tinker with the news feeds of nearly 700,000 users, measuring whether a mostly positive or mostly negative feed could influence the emotional tenor of the users’ own posts.
The data suggested that it did. “We show, via a massive ... experiment on Facebook, that emotional states can be transferred to others via emotional contagion, leading people to experience the same emotions without their awareness,” wrote the study’s authors, scientists from Cornell University, the University of California at San Francisco, and Facebook’s own Core Data Science Team.
It’s another digital-age jolt to the modern psyche, sparking a host of questions about privacy and the ethics of online behavior.
As news of the study broke, the Twitterverse was awash with words such as “super disturbing,” “creepy,” and “evil” to describe the experiment on unsuspecting users. Many also began to debate the ethics of conducting such an experiment at all, asking: How did research institutions approve this secret “manipulation” of user emotions?
But at an even deeper level, ethicists have begun to wonder about the moral landscape of the increasingly ubiquitous online world.
The concept of “emotional contagion,” with its virus-like connotations, has been studied for decades by scientists, who observe how the emotions and behaviors of human beings naturally, and even unconsciously, get in sync with those around them. It’s an important natural process for any social creature, behavioral theorists say.
“What I think is startling about this is that it calls into question our individualist assumptions about ethics,” says Evan Selinger, a fellow at the Institute for Ethics and Emerging Technologies in Hartford, Conn. “So there may be no intention on our part to influence other people, but what this kind of behavioral research has shown is that we’re doing so all the time.”
To be clear, Facebook didn’t put anything extra into users’ news feeds. The researchers simply reduced “negative” posts for one sample of users – omitting between 10 and 90 percent of posts containing words they deemed a bummer – while similarly reducing “positive” posts, those with words deemed more uplifting, for another sample.
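In code terms, the manipulation amounted to a probabilistic filter over each user’s feed. Here is a minimal Python sketch of that logic; the word list and function name are hypothetical stand-ins (the actual study classified posts using the LIWC word-counting software, and Facebook’s internal code is not public):

```python
import random

# Hypothetical stand-in for a sentiment lexicon; the study used the
# far more extensive LIWC word lists to classify posts as negative.
NEGATIVE_WORDS = {"sad", "awful", "terrible", "angry", "miserable"}

def filter_feed(posts, omission_rate):
    """Withhold each 'negative' post with probability omission_rate.

    In the study's design, each user in the negativity-reduced
    condition had a 10-90 percent chance of any negative post being
    omitted; a parallel condition reduced positive posts instead.
    """
    shown = []
    for post in posts:
        words = set(post.lower().split())
        if words & NEGATIVE_WORDS and random.random() < omission_rate:
            continue  # drop this post from the week's rendered feed
        shown.append(post)
    return shown

# Illustration: a user assigned a 60 percent omission rate
feed = ["Great day at the beach!", "Feeling awful today", "New job!"]
print(filter_feed(feed, omission_rate=0.60))
```

The key design point is what the sketch makes visible: nothing was injected into feeds, but the sampling probabilities quietly shifted each user’s emotional diet for the week.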
“We provide experimental evidence that emotional contagion occurs without direct interaction between people,” the study concluded, with users in each sample posting more positively or more negatively, respectively, depending on the content of their news feeds.
“So if you start posting in a certain way, does that affect how other people behave?” says Dr. Selinger, a professor of philosophy at the Rochester Institute of Technology in Rochester, N.Y. “That to me is an interesting ethical question, sort of my brother’s keeper sort of thing.”
And it also speaks to the emotion- and behavior-shaping power the Facebook news feed may wield: it is viewed by some 130 million Americans who sign on each day.
As any retailer knows, the presence of happy, smiling faces is vital in creating the positive vibes that might convince a customer to buy – and then come back to buy again.
But this becomes a powerful social force in the online era, critics contend. Of course, every social media user agrees to give up a measure of privacy to participate in the digital age. Tech companies’ algorithms already monitor our e-mails, browsing habits, and even physical movements as they construct digital profiles for targeted marketing efforts.
But should there be any moral or ethical constraints on the decisions that fill a news feed?
Earlier this month, before the study came out, Bianca Bosker, senior tech editor at The Huffington Post, asked a similar moral question in her post, “Should Online Ads Really Offer Binge Drinkers A Booze Discount?”
“Will online advertisements adopt a moral code?” she wrote. “As they get more insight into our peccadillos, weak spots, indulgences and addictions, should the Facebooks and Googles of the world limit marketers from pushing products that make us behave badly or cause harm? And who’d decide what 'bad' looks like?”
Given this power to shape emotion and behavior, society has to come to grips with its moral implications, ethicists say.
“Everybody wants to tap into that power, to move collective action in certain ways,” says Selinger. “I think our theories of collective action, and our ethics of collective action, might not have caught up with some of these behavior insights.”