Unilever's gambit reflects advertisers' role in cleaning digital 'swamp'
Since their ascendance in the 2000s, Google and Facebook have emerged as rule-setters for how businesses and people interact online. The titans of search and social media have largely defined how ads and other corporate content would appear, where they would flow, and the metrics of online advertising success.
That’s starting to change. Companies are finding that digital marketing brings growing opportunities but also reputational risks, and many have begun pushing back, often quietly.
On Monday, one top advertiser, Unilever, went public with its criticism, calling social media “little better than a swamp” and threatening to pull ads from platforms that leave children unprotected, create social division, or “promote anger or hate.” That comes a year after Procter & Gamble adjusted its own ad strategy, voicing similar concerns.
This pressure from the business world parallels the broader push for reform that has emerged since news reports and other investigations have unearthed examples of fake news, extremist material, and graphic content used to manipulate public discourse and sway elections. It’s a complex dynamic, a kind of three-way tug of war among the digital platforms, corporate advertisers, and the media.
The battle is mostly about revenues. But it also involves a clash of ideals.
Gradually, the ideals of techno-optimism – the faith that algorithms can replace human judgment and that society benefits the more information flows – are giving way to a more nuanced view that some information is better than other information and that some of it is not only repugnant, but downright dangerous to social cohesion.
“They’re cognizant of the problems,” says Jason Kint, chief executive of Digital Content Next, a trade group that represents many big entertainment and news organizations. “The technology, it appears, is actually allowing bad actors to amplify misinformation and garbage while at the same time squeezing out the economics of the companies that are actually accountable to consumer trust.”
Trust is a key driver for corporations pushing the social and search platforms to change.
“Fake news, racism, sexism, terrorists spreading messages of hate, toxic content directed at children – parts of the internet we have ended up with is a million miles from where we thought it would take us,” said Keith Weed, Unilever’s chief marketing and communications officer, in a speech Monday to internet advertisers. “This is a deep and systematic issue – an issue of trust that fundamentally threatens to undermine the relationship between consumers and brands.”
That risk of lost trust with customers threatens mainstream corporations, and the result is pressure on Google and Facebook to make big changes. Americans’ trust in social media and search engines has fallen 11 percentage points since last year, according to the 2018 Edelman Trust Barometer. By contrast, Americans’ trust in traditional and online media rose 5 percentage points.
How the election heightened awareness
Change is likely to prove difficult for the digital platforms, for several reasons. First, it’s technically challenging to track down who’s really behind each post, as Facebook discovered when it investigated Russian use of its platform during the 2016 elections. Fortunately for them, the digital platforms don’t necessarily have to ban borderline offenders and confront charges of censorship, media specialists say. Platforms like Facebook may just need to ensure that objectionable material doesn’t get promoted by their algorithms.
That’s a technical issue, because the algorithms have until recently been geared to making money, not policing content. In 2016, for example, when Kellogg’s and Warby Parker were embarrassed by reports that their online ads were showing up on Breitbart, both companies said they had not intended to advertise on the controversial nationalist publication’s site. They pointed instead to “retargeting ads,” the technology that allows a company’s ads to follow users around the web after they have visited the company’s website.
It was the digital platforms’ lack of transparency about where ads are placed and who sees them that prompted Procter & Gamble’s public criticism last year. Consumer companies are extremely sensitive about the values their brands are associated with.
“You want to be next to fitting content; it’s really important in media effectiveness,” says Angeline Scheinbaum, a consumer psychology expert at the University of Texas and editor of a new book, “The Dark Side of Social Media: A Consumer Psychology Perspective.” “Now more automated media buying has resulted in advertisers being horrified about where their ad is ending up.”
Fine-tuning algorithms, with profits at stake
The second and bigger difficulty is that changing their practices will likely cost Google and Facebook ad revenue, after several years of reaping huge profits by setting their own rules.
Although a Facebook executive commended Unilever's stand, and said the company would work to meet advertiser expectations, the social media giant faces financial pressures of its own.
Already, Facebook has seen a decline of roughly 50 million hours a day in use of its network because of the company’s new push to increase the quality of interactions rather than the quantity, according to CEO Mark Zuckerberg. Translation: more posts about friends, fewer viral cat videos and fake news posts. Also worrying to Facebook: research firm eMarketer forecasts that 2 million users under 25 will quit the social network this year.
On Thursday, Google unveiled an ad blocker for its Chrome web browser, a counterintuitive move from a company that makes the bulk of its money from targeted advertising. The initiative grew out of a Google-inspired coalition of advertising and publishing executives aimed at removing the online ads that people find most annoying. But in targeting a dozen ad formats, the move will affect revenues most heavily at companies other than Google, and some members of the coalition grumbled that the search giant had dominated the process, according to The Wall Street Journal.
Furthermore, Facebook reportedly lobbied successfully to be exempt from the new ad blocking, and a maker of pop-up ads received a partial exemption as well.
The two giants are not hurting financially. EMarketer expects Google and Facebook will capture two-thirds of US digital advertising this year.
'A murky grey area'
Many observers are cautiously optimistic that the three-way tug of war will be resolved.
“Until now, most sites and publishers have focused on cleaning up the illegal content, such as hate speech or pirated content,” Daniel Castro, vice president at the Information Technology and Innovation Foundation in Washington, writes in an email. “But there is a lot more content that is in a murky grey area. And here is where sites may decide the content should remain – lest removing it drive away users – but that they allow advertisers to distinguish what types of content they are willing to advertise near.”
The replacement of techno-optimist ideals with corporate values may not be the ultimate answer, however, if the history of previous media disruptions is any guide.
The rise of mass media more than a century ago unleashed yellow journalism. And the advent of television led to the rigging of network game shows in the 1950s.
“There is a recurring pattern of new media becoming overly commercialized and socially irresponsible,” Victor Pickard, a professor of communications at the University of Pennsylvania in Philadelphia, writes in an email. “Corporations and advertisers rein in these commercial excesses only when it becomes absolutely necessary, and usually to prevent a loss in profit. So more often it is public pressure, commercial imperatives, and the threat of government regulation that incentivizes corporate social responsibility.”
That public pressure will be needed again, Michelle Amazeen, a mass communications professor at Boston University, writes in an email. “What is profitable isn't always what's best for society… There [are] too many conflicting interests to leave it to corporations to regulate social media.”