Can AI help Facebook stop discriminatory advertising?

Advertisers have grown accustomed to targeting ads to specific audiences. But the tech giant is hoping to crack down on the use of user data to exclude minorities from offers of housing, employment, or credit.

A man walks past a mural in an office on the Facebook campus in Menlo Park, Calif., June 11, 2014. The tech giant announced on Wednesday that it will use machine learning in an effort to comb ads and prevent discrimination.

Jeff Chiu/AP/File

February 9, 2017

When faced with a challenge, what’s a tech company to do? Turn to technology, Facebook suggests.

Following criticism that its ad-approval process was failing to weed out discriminatory ads, Facebook has revised its approach to advertising, the company announced on Wednesday. In addition to updating its policies about how advertisers can use data to target users, the social media giant plans to implement a high-tech solution: machine learning.

In recent years, artificial intelligence has climbed off the pages of science fiction novels and into myriad aspects of everyday life, from internet searches to health care decisions to traffic recommendations. But Facebook's new ad-approval algorithms wade into newer territory, as the company attempts to use machine learning to address, or at least not contribute to, social discrimination.

“Machine learning has been around for half a century at least, but we’re only now starting to use it to make a social difference,” Geoffrey Gordon, an associate professor in the Machine Learning Department at Carnegie Mellon University in Pittsburgh, Penn., tells The Christian Science Monitor in a phone interview. “It’s going to become increasingly important.”

Though analysts caution that machine learning has its limits, such an approach also carries tremendous potential for addressing these types of challenges. With that in mind, more companies – particularly in the tech sector – are likely to deploy similar techniques.

Facebook’s change of strategy, intended to make the platform more inclusive, follows the discovery that some of its ads were specifically excluding certain racial groups. In October, the nonprofit investigative news site ProPublica tested the company’s ad-approval process with an ad for a “renter event” that explicitly excluded African-Americans. The Fair Housing Act of 1968 prohibits discrimination or showing preference to anyone on the basis of race, making that ad illegal – but it was nevertheless approved within 15 minutes, ProPublica reported.

Why? Because while Facebook doesn't ask users to identify their race and bars advertisers from directing their content at specific races, it has a host of information about users on file: pages they like, what languages they use, and so on. This kind of information is valuable to advertisers, who can improve their chances of making a sale by targeting their ads toward people who are more likely to buy their product.

But by creating a demographic picture of a user, this data may make it possible to determine an individual’s race, and then improperly exclude or target individuals. The company's updated policies emphasize that advertisers cannot discriminate against users on the basis of personal attributes, which Facebook says include "race, ethnicity, color, national origin, religion, age, sex, sexual orientation, gender identity, family status, disability, medical or genetic condition." 

There's a fine line between appropriate use of such information and discrimination, as Facebook’s head of US multicultural sales, Christian Martinez, explained following the ProPublica investigation: “a merchant selling hair care products that are designed for black women” will need to reach that constituency, while “an apartment building that won’t rent to black people or an employer that only hires men [could use the information for] negative exclusion.”

For Facebook, the challenge is maintaining that advertising advantage while preventing discrimination, particularly where it’s illegal. That’s where machine learning comes in.

“We’re beginning to test new technology that leverages machine learning to help us identify ads that offer housing, employment or credit opportunities – the types of advertising stakeholders told us they were concerned about,” the company said in a statement on Wednesday.

The computer “is just looking for patterns in data that you supply to it,” explains Professor Gordon. 

That means Facebook can decide which areas it wants to focus on – namely, “ads that offer housing, employment or credit opportunities,” according to the company – and then supply hundreds of examples of these types of ads to a computer.

If a human “teaches” the computer by initially labeling each ad as discriminatory or nondiscriminatory, a computer can learn to go “from the text of the advertising to a prediction of whether it’s discriminatory or not,” Gordon says.

This kind of machine learning – known as “supervised learning” – already has dozens of applications, from determining which emails are spam to recognizing faces in a photo.
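
To make the idea concrete, here is a minimal sketch of the kind of supervised text classifier Gordon describes, written in Python with scikit-learn. The tiny labeled dataset, the word-frequency features, and the linear model are illustrative assumptions, not Facebook's actual system.

```python
# A minimal sketch of the supervised-learning setup Gordon describes:
# humans label example ads, and a model learns to predict the label from
# the ad text. The dataset and model choice here are illustrative
# assumptions, not Facebook's actual system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical human-labeled training ads (1 = discriminatory, 0 = not).
ads = [
    "Apartment for rent, no families with children",
    "Spacious two-bedroom apartment near downtown",
    "Hiring salespeople, men only need apply",
    "Hiring salespeople, all experience levels welcome",
]
labels = [1, 0, 1, 0]

# Convert ad text into word-frequency features, then fit a linear classifier.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(ads, labels)

# Score a new, unseen ad.
print(model.predict(["One-bedroom unit available, no kids allowed"]))
```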

But there are certainly limits to its effectiveness, Gordon adds.

“You’re not going to do better than your source of information,” he explains. Teaching the machine to recognize discriminatory ads requires lots of examples of similar ads. 

“If the distribution of ads that you see changes, the machine learning might stop working,” Gordon explains, noting that content producers often change tactics precisely to slip past AI filters, much as spammers do with email spam filters. Machines’ insufficient understanding of detail can also lead to high-profile problems, as when Google Photos in 2015 mistakenly labeled black people as gorillas.
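
There is no complete fix for that kind of drift, but a common partial safeguard is to keep evaluating the deployed model on freshly labeled ads and retrain it when accuracy slips. The sketch below illustrates the idea; the function name and threshold are assumptions, not anything Facebook has described.

```python
# A generic sketch of drift monitoring, not Facebook's process: keep
# scoring the deployed model on freshly labeled ads, and flag it for
# retraining when accuracy drops. The threshold is an assumed value.
from sklearn.metrics import accuracy_score

ACCURACY_FLOOR = 0.90  # assumed acceptable accuracy for this sketch

def needs_retraining(model, fresh_ads, fresh_labels):
    """Return True if the model has degraded on recently labeled ads."""
    predictions = model.predict(fresh_ads)
    return accuracy_score(fresh_labels, predictions) < ACCURACY_FLOOR
```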

“Teaching” the machine also means having a person take the time to go through hundreds of ads and label them, as well as continue to check and correct a machine’s work. That makes the system vulnerable to human biases.

“That process of refinement involves sorting, labeling and tagging – which is difficult to do without using assumptions about ethnicity, gender, race, religion and the like,” explains Amy Webb, founder and CEO of the Future Today Institute, in an email to the Monitor. “The system learns through a process of real-time experimenting and testing, so once bias creeps in, it can be difficult to remove it.”

More overt bias problems have already surfaced in AI bots, such as Tay, Microsoft’s chatbot, which repeated the Nazi slogans fed to it by Twitter users. Bias in labeled training data may be subtler, since it is presumably unintentional, but it could conceivably create equally persistent problems.

Unbiased machine learning “is the subject of a lot of current research,” says Gordon. One answer, he suggests, is to have many teachers, since a consensus view of discrimination may be less vulnerable to any individual’s biases.
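
A simple version of that consensus approach is to collect labels from several annotators and use the majority vote as the training label, as in the sketch below. The equal weighting of annotators is an assumption; real systems often weight annotators by measured reliability.

```python
# A simple version of the "many teachers" idea: each ad is labeled by
# several annotators, and the training label is the majority vote.
from collections import Counter

def majority_label(annotator_labels):
    """Return the most common label a group of annotators gave one ad."""
    return Counter(annotator_labels).most_common(1)[0][0]

# Two of three annotators call this ad discriminatory (label 1).
print(majority_label([1, 1, 0]))  # -> 1
```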

Since October, the company has been working with civil rights groups and government organizations to strengthen its nondiscrimination policies. Despite potential obstacles, those groups seem pleased with the progress that the AI system and associated steps represent.

“We ‘like’ Facebook for following up on its commitment to combatting discriminatory targeting in online advertisements,” Wade Henderson, president and chief executive officer of the Leadership Conference on Civil and Human Rights, said in a statement on Wednesday.

And machine learning is likely to become a component in other companies’ efforts to combat discrimination, as well as perform a host of other functions. Though he notes that tech companies are “typically fairly secretive” about their plans, Gordon suggests that such projects are probably already underway at many of them.

“Facebook isn’t the only company doing this – as far as I know, all of the tech companies are considering a similar ... question,” he concludes.

But is the ability to target advertising on social media platforms really worth the trouble? Professor Webb, who also teaches at the NYU Stern School of Business, sounds a note of caution.

“My behavior in Facebook is not an accurate representation for who I really am, how I think, and how I act – and that’s true of most people,” she writes. “We sometimes like, comment and post authentically, but more often we’re revealing just the aspirational versions of ourselves. That may ultimately not be useful for would-be advertisers.”