Think computers are less biased than people? Think again.

In September, government officials gathered with AI scientists and entrepreneurs at the 2018 World Artificial Intelligence Conference in Shanghai. AI-driven tech is cropping up in nearly every major industry, from policing and health care to insurance and investment banking.

Aly Song/Reuters

October 3, 2018

From smart trash bins to crime forecasting, artificial intelligence is creeping into our lives in ways we might not even notice.

“Whether you are a resident involved in city programming or just a tourist traveling in a city, a lot of city programs and the ways you interact with municipalities are with AI,” says Rashida Richardson, director of policy research at the AI Now Institute, an interdisciplinary research center at New York University studying the social implications of artificial intelligence.

In fact, municipalities will increase their investment in AI-driven technology to more than $81 billion globally in 2018, according to IDC’s “Worldwide Semiannual Smart Cities Spending Guide.” Municipal spending on AI-driven technology is expected to grow to $158 billion in 2022, the study finds. This technology ranges from smart trash bins that send a wireless signal to garbage collectors when they are full to real-time crime centers that feed police officers instant information to help identify and stop emerging crime.

Why We Wrote This

Artificial intelligence is often billed as the answer to biased decisionmaking. But as long as people write that code, humans will have to wrestle with their own biases.

The explosion of AI-driven technology has been a boon for cash-strapped cities and towns interested in boosting services while tightening budgets. At its best, AI removes a degree of subjectivity from decisionmaking. But artificially intelligent systems are built by people. And embedded in the code for these systems lie some very human limitations. The issue, Ms. Richardson says, is that the public often doesn’t know what data is being used to make these decisions, or even that data is driving these decisions at all.

In some cases, AI is being asked to make increasingly complex decisions that can significantly impact someone’s life, such as deciding if someone qualifies for Medicaid or forecasting who might commit a crime. Yet, most municipalities lack the technical expertise to understand how the technology actually works or to determine if the algorithm is biased, Richardson says.


Sometimes there are inexpensive, low-tech solutions that might work better. Take the example of AI determining whether a defendant is a pretrial flight risk. If the goal is to make sure a defendant arrives for his or her court date, AI isn’t very effective in achieving that specific outcome, Richardson says.

“There are cheaper methods available, such as texting someone to remind them to appear in court or making people aware of the consequences of not appearing in court,” she says.

The data isn’t always accurate

AI’s ability to predict an outcome is only as accurate as the data it’s modeled on. An algorithm is a series of steps that leads to a predetermined outcome, Richardson explains. The data could contain an error or flaw introduced by the developers, and if that flaw goes unnoticed, the mistake will be perpetuated each time the algorithm is used.
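To see how a data flaw can echo through every later decision, consider a toy risk-scoring sketch in Python. The neighborhoods, records, and the clerical error are all invented for illustration; no real municipal system works exactly this way.

```python
# Toy illustration (invented data, not any city's real system): a simple rule
# "learned" from historical court records. If the records contain a systematic
# error, every prediction made with the rule repeats that error.
from collections import defaultdict

# Hypothetical training records: (neighborhood, missed_court_date).
# Suppose a clerical flaw mislabeled every record from "north_side" as a miss.
records = [
    ("north_side", True), ("north_side", True), ("north_side", True),
    ("south_side", False), ("south_side", True), ("south_side", False),
]

# "Training": estimate the miss rate for each neighborhood.
counts = defaultdict(lambda: [0, 0])  # neighborhood -> [misses, total]
for neighborhood, missed in records:
    counts[neighborhood][0] += int(missed)
    counts[neighborhood][1] += 1

def predicted_risk(neighborhood: str) -> float:
    """Learned probability that a defendant from this neighborhood misses court."""
    misses, total = counts[neighborhood]
    return misses / total if total else 0.0

# Every time the model is used, the original labeling flaw resurfaces.
print(predicted_risk("north_side"))  # 1.0 -- inflated entirely by the data error
print(predicted_risk("south_side"))  # 0.33...
```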

AI isn’t immune to the cognitive biases that can sway human decisions, according to a white paper by a group of scientists from the Czech Republic and Germany. Biases such as “confirmation bias” (accepting a result because it confirms a belief) or “availability bias” (giving preference to information and events that are more recent and memorable) can become part of the algorithm, the team finds. For instance, a data scientist developing the algorithm may select data that supports his or her hypothesis and disregard data that points to the opposite conclusion.

Outcomes also can be biased if the data isn’t based on diverse experiences, says Pradeep Ravikumar, an associate professor in the machine learning department at the School of Computer Science at Carnegie Mellon University in Pittsburgh. If, for example, the AI assistant in a municipality’s office of community and human services isn’t asking questions tailored to a diverse population, the outcomes could be biased, he says.


Yet, Professor Ravikumar believes that as long as data scientists developing the algorithm understand the social issues at stake and the people using the technology understand how it works, then AI has the potential to make decisions that are less biased than the decisions humans would make.

You can examine AI to see if it’s biased, he says. You can look at what drove a decision and see what needs to be changed for the technology to make a different decision.
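One way to picture that kind of examination: if the model is a simple weighted score, you can list exactly which inputs pushed a given decision and by how much. The feature names, weights, and threshold below are hypothetical, chosen only to illustrate the audit, not drawn from any real pretrial tool.

```python
# Hedged sketch of examining what drove a decision in a simple linear scoring
# model. All names, weights, and the cutoff are invented for illustration.

weights = {
    "prior_missed_appearances": 2.0,
    "months_at_current_address": -0.1,
    "age": -0.05,
}
THRESHOLD = 1.5  # hypothetical cutoff for flagging someone as a flight risk

def explain(applicant: dict) -> None:
    """Print the decision and each input's contribution to the score."""
    contributions = {name: weights[name] * applicant[name] for name in weights}
    score = sum(contributions.values())
    decision = "flag" if score > THRESHOLD else "do not flag"
    print(f"decision: {decision} (score={score:.2f})")
    for name, value in sorted(contributions.items(), key=lambda kv: -abs(kv[1])):
        print(f"  {name}: {value:+.2f}")

explain({"prior_missed_appearances": 1, "months_at_current_address": 3, "age": 30})
# Reading the contributions shows which inputs to question -- and what would
# have to change for the technology to make a different decision.
```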

AI requires human oversight

However, AI decisions are rarely questioned, Richardson says. “The problem with the government use of these systems is there is a false sense of objectivity,” she says.

AI systems don’t always come under the same level of scrutiny that a person making the same decisions would face.

“Human oversight is critical in deploying AI,” says Adelaide O’Brien, research director of government digital transformation strategies for IDC Government Insights, a market intelligence firm based in Framingham, Mass.

Government officials need to review AI recommendations and subject the algorithms to formal performance reviews, just as they would a human employee, she says. There also needs to be a clear plan for addressing errors and perceived privacy violations, she adds.

Yet, it’s not just our local governments using AI to make decisions. Corporations and banks are using AI to decide who gets hired, who gets a loan, and whether you qualify for insurance, says Cathy O’Neil, author of “Weapons of Math Destruction,” which looks at the way big data increases inequality and threatens democracy.

“Any time we apply to jobs, our resumes and applications are fed through algorithms which filter out most applications,” Dr. O'Neil writes in an email. “The same [is true] for applications for credit cards, loans or insurance. We have no information about how these scoring systems work, whether they have the right data about us, or any way to appeal a bad score (which we don’t even hear about directly).”

Transparency is essential to preventing bias, says Jouni Harjumäki, a graduate student researcher at the University of Helsinki in Finland who is studying ways to prevent discrimination in AI use. Policymakers and legislators need to engage in this discussion as well, he says; otherwise there is no legal obligation for companies to be transparent about the way their algorithms make decisions.

Dr. O’Neil agrees that AI needs to be regulated. Algorithms should be tested based on a well-defined, publicly available definition of fairness, she says. “At the end of the day these systems choose the lucky from the unlucky, and it’s a system built by the lucky.”
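What might such a test look like in practice? A minimal sketch, assuming the publicly stated fairness definition is demographic parity: approval rates across groups should not differ by more than a stated tolerance. The decisions, group labels, and tolerance below are invented for illustration.

```python
# Minimal fairness-audit sketch: check a system's decisions against a published
# definition of fairness (here, demographic parity). All data is invented.
from collections import defaultdict

decisions = [  # (group, approved) pairs produced by the system under audit
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]
TOLERANCE = 0.10  # maximum allowed gap in approval rates between groups

tallies = defaultdict(lambda: [0, 0])  # group -> [approved, total]
for group, approved in decisions:
    tallies[group][0] += int(approved)
    tallies[group][1] += 1

approval_rates = {group: approved / total for group, (approved, total) in tallies.items()}
gap = max(approval_rates.values()) - min(approval_rates.values())

print(approval_rates)  # {'group_a': 0.75, 'group_b': 0.25}
print("PASS" if gap <= TOLERANCE else "FAIL", f"(gap = {gap:.2f})")
```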

One way to lessen the consequences of using AI is to let the public know when the technology is being used to make decisions that can affect their ability to get a loan, qualify for health benefits, or even be eligible to post bail after an arrest. Municipal governments should create a database or public listing of the types of decisions that are being made by AI to bring more public awareness to its use, Richardson recommends.

“The question,” Richardson says, “is how to mitigate bias because there is no way to prevent it.”