How ads hijacked the dream of the internet. Can digital citizens fight back?

A traveler pauses to check his phone and computer as he waits for his flight at Indianapolis International Airport. The dawn of the internet brought promises of an egalitarian global forum. Today, many users are left wondering just what went wrong.

Michael Conroy/AP

December 6, 2018

In 2015, Anastasia Dedyukhina, then a client director for a London digital ad agency, found herself at the peak of her career but at a low point in her emotional life.

“Basically my job was to launch all these new tech products into the markets and convince people to use more technology,” she says. “Having said that, I don’t think I was managing my own devices very well.”

Like many smartphone owners, Dr. Dedyukhina, who holds a PhD in philology from Lomonosov Moscow State University, found herself habitually checking her phone, she says, for no reason. And her attention to her screen began to come at a cost.

Why We Wrote This

In the 1990s, Silicon Valley promised a global virtual community that would level hierarchies and empower individuals. How did that ideal morph into a habit-forming outrage machine that spies on us?

“I was very reactive and I was constantly feeling very tired,” Dedyukhina says. Except, she noticed, when she was traveling abroad without a data plan. 

“I realized that I was feeling much lighter. I didn’t feel that anxious,” she says. “I like to compare it to this feeling that you’re surrounded by 10 children of different ages, and they all pull you in different directions.”


As with many moral awakenings, the breaking point came when the ghosts started visiting. “I started feeling phantom vibrations, you know, when you have the sensation that your phone is ringing in your pocket, and you don’t even have a pocket.”

Now a coach, author, and public speaker, Dedyukhina runs a consulting company, Consciously Digital, that promotes what she calls “digital minimalism,” an approach that calls not for abandoning technology altogether but for practicing “time management,” “space management,” “relationship management,” and “self management.” These days, her smartphone sits in a drawer, switched off, without a SIM card, and is used only to summon the occasional Uber.

An infrastructure problem

That so many of us overuse our devices is no accident, says Dedyukhina, but rather the outcome of deliberate design choices made by tech companies that have been incentivized to promote habit-forming behavior.  

And she is far from alone in making this claim. Other critics include former Google design ethicist Tristan Harris, who warns of the subtle ways that technology “hijacks” our thinking; San Diego State University psychologist Jean Twenge, who argues that smartphones are pushing those born in the 1990s and later to “the brink of the worst mental-health crisis in decades”; Facebook’s first president, Sean Parker, who last year warned that the social network was “exploiting a vulnerability in human psychology” to “consume as much of your time and conscious attention as possible”; and University of North Carolina techno-sociologist Zeynep Tufekci, who in a TED talk last September said that “we’re building this infrastructure of surveillance authoritarianism merely to get people to click on ads.”

In and around Silicon Valley, tech workers are taking pains to protect their children from the products they tout. The New York Times reported last month that childcare contracts drafted by parents in San Francisco and Cupertino increasingly include demands that nannies hide phones, laptops, TVs, and all other screens from their kids.


These concerns over excessive screen time and its effects share one broad theme: Technology companies and those who provide content for them are doing everything they can to seize and commodify our attention, and it’s working.

What’s more, it’s working in ways that are causing social problems far beyond the distraction and isolation normally attributed to smartphones. To better target their users, tech companies gather data on their online behavior, which is then fed into algorithms that choose content aimed at keeping them engaged. And those algorithms are largely unconcerned with whether the content is a cat video or a xenophobic conspiracy theory. At the same time, the infrastructure that makes it possible to micro-target potential customers also enables governments to spy on their citizens and manipulate public opinion at home and abroad.

“It’s not a content problem. It’s an infrastructure problem,” says Nathalie Maréchal, a researcher at Ranking Digital Rights, a Washington, D.C., nonprofit that aims to set standards for how tech companies safeguard human rights. “We need to continue to build the consensus and build this understanding of the connection between targeted advertising and media manipulation, and, ultimately, fascism. Because make no mistake, that’s where this is heading if we don’t do something about it.”

Causing much of this dysfunction, Dr. Maréchal argued in a November essay for Motherboard, is the targeted-advertising business model, in which web publishers offer “free” content to users in exchange for behavioral data that gets passed on to advertisers.

“Targeted advertising,” she writes, “provides tools for political advertisers and propagandists to micro-segment audiences in ways that inhibit a common understanding of reality. This creates a perfect storm for authoritarian populists like Rodrigo Duterte, Donald Trump, and [Jair Bolsonaro] to seize power, with dire consequences for human rights.”

Chasing eyeballs

The era of targeted advertising began in 2000, just as the dot-com bubble was bursting. Facing pressure from investors to post a profit, the two-year-old search company Google turned to the vast stores of data that it had gathered from its users as they entered search terms and clicked on results. This data, Google discovered, could be used to predict users’ behavior with an accuracy and precision that previous generations of advertisers could only dream of. Just as the dot-com collapse was wiping out trillions of dollars, Google was turning a profit for the first time. Today, advertising accounts for 84 percent of the company’s revenue.

In the years since, targeted advertising has been honed to a fine edge. For instance, when you visit a typical online newspaper, the site collects information about your browsing history, your location, and other demographic details, and sends it to an ad exchange, which submits your profile to advertisers. The advertisers then offer bids, typically cents or fractions of a cent, to show you an ad selected for your profile. The whole bidding process happens automatically, in about a tenth of a second, before the page loads.
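For readers who want to see the moving parts, here is a minimal sketch of that auction in Python. Everything in it, from the advertiser names to the bid amounts to the matching logic, is invented for illustration; real exchanges speak standardized protocols such as OpenRTB, and many have traditionally run variants of a second-price auction.

```python
import random
import time

# Illustrative sketch of a real-time bidding (RTB) auction as described
# above. All names and numbers are hypothetical; real exchanges use far
# richer profiles and standardized protocols (e.g., OpenRTB).

def build_profile():
    """What the publisher's page gathers about the visitor."""
    return {
        "location": "Indianapolis, IN",
        "browsing_history": ["travel", "consumer tech"],
        "device": "smartphone",
    }

def collect_bids(profile, advertisers):
    """Each advertiser scores the profile and returns a bid in dollars."""
    bids = {}
    for name, target_interests in advertisers.items():
        # Crude relevance score: overlap between the advertiser's targets
        # and the visitor's browsing history.
        matches = len(set(target_interests) & set(profile["browsing_history"]))
        if matches:
            # Bids are typically cents or fractions of a cent.
            bids[name] = round(0.002 * matches * random.uniform(0.5, 1.5), 4)
    return bids

def run_auction(bids):
    """Pick a winner; in a second-price auction, the winner pays the
    runner-up's bid rather than its own."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    winner, _ = ranked[0]
    price = ranked[1][1] if len(ranked) > 1 else ranked[0][1]
    return winner, price

advertisers = {
    "AirlineCo": ["travel"],
    "GadgetCorp": ["consumer tech", "travel"],
    "SoupWorks": ["cooking"],
}

start = time.perf_counter()
winner, price = run_auction(collect_bids(build_profile(), advertisers))
elapsed_ms = (time.perf_counter() - start) * 1000
print(f"{winner} wins the impression at ${price:.4f} ({elapsed_ms:.2f} ms)")
```

The point of the exercise is the speed and the indifference: the auction finishes in a tiny fraction of the tenth of a second the page allows, and no human ever looks at your profile along the way.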

Facebook has pushed the model to an even greater extreme. Think of everything it knows about you, even if you’re not a particularly heavy user: every status update, every reply, every like, heart, and angry emoji, every location you’ve logged in from. Facebook stores it all. Even when you start to write a post, think better of it, and delete it, Facebook keeps that, too.

The company even goes out of its way to acquire additional data on you that you don’t knowingly give it. In August, Facebook asked major banks such as JPMorgan Chase and Wells Fargo to hand over users’ financial data, including checking account balances and transaction histories. 

A trove of internal emails released Wednesday by Damian Collins, a member of the British Parliament, revealed that Facebook engineered ways to mine data from Android users without their permission. Those communications, gathered as part of an investigation into the company’s role in spreading misinformation, described a quid pro quo in which developers who wished to connect their apps to the network had to agree to hand over user data to Facebook.

Even if you don’t have an account, Facebook is likely keeping tabs on you. Earlier this year, chief executive Mark Zuckerberg told US Representative Ben Ray Luján (D) of New Mexico that Facebook collects “data of people who have not signed up for Facebook” for security reasons. “This kind of data collection is fundamental to how the internet works,” Facebook later told Reuters.

All of this data collection is in the service of delivering ads. Each profile is worth just pennies a day, but when you have more than 2 billion monthly users, as Facebook does, those pennies add up.
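The arithmetic behind that observation is easy to check. In the sketch below, the two-cents-per-day figure is an assumption chosen for illustration, not a number Facebook reports, but the scale is real.

```python
# Back-of-the-envelope math on why "those pennies add up."
# The per-user figure is a hypothetical assumption, not a reported one.
monthly_users = 2_000_000_000       # Facebook's reported scale in 2018
revenue_per_user_per_day = 0.02     # two pennies a day, hypothetically

daily = monthly_users * revenue_per_user_per_day
print(f"${daily:,.0f} per day")         # $40,000,000 per day
print(f"${daily * 365:,.0f} per year")  # $14,600,000,000 per year
```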

Profits from ads create a powerful incentive to maximize user engagement, or “chase eyeballs,” in the parlance of online publishing. For news outlets, that creates pressure to prioritize viral content over what is trustworthy or in the public interest.

For platforms like Facebook and YouTube, it means hiring psychology postdocs to devise ever more ingenious ways to keep users glued to the screen. Design techniques include infinitely scrolling news feeds and video autoplay, features that, like a bottomless bowl of soup, subtly encourage users to consume more than they would otherwise.
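A toy model makes the incentive plain. The sketch below is not any platform’s actual ranking code; it simply orders a feed by predicted watch time, which is all an engagement-maximizing system needs to do to surface whatever holds attention longest.

```python
from dataclasses import dataclass

# A toy model of engagement-driven ranking, not any platform's real
# algorithm: items are ordered purely by predicted watch time.

@dataclass
class Item:
    title: str
    predicted_minutes_watched: float  # imagined output of an engagement model

def rank_feed(items: list[Item]) -> list[Item]:
    """Sort purely by predicted engagement; the ranking is indifferent to
    whether an item is a cat video or a conspiracy theory."""
    return sorted(items, key=lambda i: i.predicted_minutes_watched, reverse=True)

feed = rank_feed([
    Item("Cat video", 2.1),
    Item("Local news report", 1.4),
    Item("Outrage-bait conspiracy clip", 7.8),  # outrage holds attention
])
for item in feed:
    print(item.title)
```

Nothing in that sort order distinguishes trustworthy content from inflammatory content, which is precisely the critics’ point.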

Often, the algorithms promote ever more extreme content. In her TED talk, Professor Tufekci of UNC describes watching videos of Donald Trump rallies, only to have YouTube’s “up next” algorithm queue up and autoplay videos promoting white supremacy. When she did the same for Hillary Clinton and Bernie Sanders rallies, leftist-conspiracy videos streamed forth. “I once watched a video about vegetarianism on YouTube, and YouTube recommended and autoplayed a video on being vegan,” she told the audience. “It’s like you’re never hardcore enough for YouTube.”

In November, the online magazine The Intercept reported that Facebook’s algorithm had automatically generated the category “people who have expressed an interest in or like pages related to White genocide conspiracy theory” as one of its targeting options for advertisers. The category included 168,000 users.

A way forward?

Even some of Silicon Valley’s biggest promoters acknowledge the sway our devices hold over our thinking. “We don’t let our young son get near the phone,” says Rep. Ro Khanna (D) of California, whose district includes the tech giants Apple, Intel, and eBay. “These engineers, they were very clever, they designed programs in a way that’s designed to maximize eyeballs on the screen.”

“We probably need to take a step back,” he continues, “whether it’s social media or whether it is addiction, and have people ask these ethical questions and ask about what the ethical responsibilities are and have tech leaders participate in the solutions.”

Yet Representative Khanna already sees some alternative business models emerging. “I don’t know if it will succeed in the marketplace, but if there’s enough interest in saying that we don’t want to be bombarded with ads, you can move more to a subscription model,” he says. “I think you may see some of the folks come that way.”

Already, there are signs of a shift under way, driven in part by falling ad rates. A growing number of news outlets are erecting paywalls and trading advertisers for subscribers, reasoning that a direct relationship with their readers would better serve the bottom line in the long run.

In September, Apple implemented screen-time controls in its iOS 12 release, in response to pressure from investors to address smartphone overuse. Now, iPhone owners can monitor how much they’re using their devices and set screen-free times and time limits on individual apps. That same month Twitter announced that it would begin allowing users to switch off its algorithmic timeline and view tweets chronologically. Similarly, this summer Instagram introduced its “you’re all caught up” notification to prevent mindless browsing.

For Nir Eyal, the author of the influential 2014 book “Hooked: How to Build Habit-Forming Products,” these shifts are signs that the market is working.

“They are responding to customer feedback,” says Mr. Eyal. “They make the products safer. They make it better.”

While Eyal acknowledges that children should be protected from psychological manipulation by tech companies, he argues that, apart from a small number of genuine internet addicts, most people can put down their phones whenever they want. “We can’t perpetuate this message that there’s nothing people can do,” he says. “We are giving these companies more power and more control than they deserve.”

Dedyukhina argues that shifting our behaviors around smartphones will require a broader cultural shift. “I don’t think that we should be relying just on the tech companies,” she says. “Maybe, working together, we can get to the point when checking your phone in front of other people will stop being cool.”