Fears about robot overlords are (perhaps) premature

Computer science professor Melanie Mitchell clears up misconceptions about machine learning in “Artificial Intelligence: A Guide for Thinking Humans.”

“Artificial Intelligence: A Guide for Thinking Humans” by Melanie Mitchell, Farrar, Straus and Giroux, 324 pp.


October 25, 2019

In “Artificial Intelligence: A Guide for Thinking Humans,” Melanie Mitchell, a computer science professor at Portland State University, tells the story, one of many, of a graduate student who had seemingly trained a neural network to classify photographs according to whether or not they contained an animal. When the student looked more closely, however, he realized that the network was not recognizing animals at all; it was instead putting images with blurry backgrounds in the “contains an animal” category. Why? The nature photos the network had been trained on typically featured an animal in focus in the foreground against a blurred background. The machine had discovered a correlation between animal photos and blurry backgrounds.

Mitchell notes that these types of misjudgments are not unusual in the field of AI. “The machine learns what it observes in the data rather than what you (the human) might observe,” she explains. “If there are statistical associations in the training data, even if irrelevant to the task at hand, the machine will happily learn those instead of what you wanted it to learn.”
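Mitchell’s point is easy to demonstrate in miniature. The following toy sketch (my own illustration, not from the book; the features and data are invented) trains a simple linear classifier on synthetic “photos” in which background blur happens to correlate with the animal label. The model latches onto the blur cue and collapses once that correlation is broken:

```python
# Toy illustration of the "shortcut learning" failure described above:
# a classifier trained on data where background blur correlates with the
# "contains an animal" label learns the blur, not the animal.
# All feature names and numbers here are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

# Training set: animal photos (label 1) almost always have blurry
# backgrounds, so blur is a near-perfect -- but spurious -- predictor.
labels = rng.integers(0, 2, n)
blur = labels * 0.9 + rng.normal(0, 0.2, n)           # spurious cue
animal_signal = labels * 0.3 + rng.normal(0, 0.5, n)  # weak "real" cue
X_train = np.column_stack([blur, animal_signal])

clf = LogisticRegression().fit(X_train, labels)
print("learned weights [blur, animal]:", clf.coef_)   # blur dominates

# Test set breaks the correlation: animals against sharp backgrounds.
test_labels = rng.integers(0, 2, n)
test_blur = (1 - test_labels) * 0.9 + rng.normal(0, 0.2, n)
test_signal = test_labels * 0.3 + rng.normal(0, 0.5, n)
X_test = np.column_stack([test_blur, test_signal])
print("accuracy once blur no longer correlates:",
      clf.score(X_test, test_labels))  # far below training accuracy
```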

Mitchell’s lucid, clear-eyed account of the state of AI – spanning its history, current status, and future prospects – returns again and again to the idea that computers simply aren’t like you and me. She opens the book by recounting a 2014 meeting on AI that she attended at Google’s world headquarters in Mountain View, California. She was accompanying her mentor, Douglas Hofstadter, a pioneer in the field who spoke passionately that day about his profound fear that Google’s great ambitions, from self-driving cars to speech recognition to computer-generated art, would turn human beings into “relics.” The author’s own, more measured view is that AI is not poised for any such triumph, precisely because machines lack certain human qualities. Her belief is that without a good deal of decidedly human common sense, much of which is subconscious and intuitive, machines will fail to achieve human levels of performance.


Many of the challenges of creating fully intelligent machines come down to the paradox, popular in AI research, that “easy things are hard.” Computers have famously vanquished human champions at chess and “Jeopardy!,” but they still have trouble, say, figuring out whether or not a given photo includes an animal. Machines are as yet incapable of generalizing, understanding cause and effect, or transferring knowledge from situation to situation – skills that we Homo sapiens begin to develop in infancy.

These big themes are fascinating, and Mitchell conveys them with clarity. Along the way, she describes specific AI programs in technical language that can be challenging for the layperson (the many charts and illustrations help). She lightens the book, though, with an affable tone, even throwing in the occasional “Star Trek” joke. She also writes with admirable frankness. Posing the question “Will AI result in massive unemployment for humans?” she answers, “I don’t know.” (She adds that her guess is that it will not.) She predicts that AI will not master speech recognition until machines can actually understand what speakers are saying, but then acknowledges that she’s “been wrong before.”

While she’s an AI booster, Mitchell expresses a number of concerns about future implementations of the technology. Recent advances in AI accompanied the growth of the Internet and the related explosion in data. The field is currently dominated by deep learning, in which many-layered neural networks learn by digesting vast amounts of data, and the author warns that “there is a lot to worry about regarding the potential for dangerous and unethical uses of algorithms and data.” She also points out that AI systems are easily tricked, making them vulnerable to hackers, which could have disastrous consequences where technologies like self-driving cars are concerned. Finally, Mitchell worries about the social biases that can be reproduced in AI programs; for instance, facial recognition technology is significantly more likely to produce errors when the subjects are people of color.
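The “easily tricked” vulnerability refers to the adversarial-example phenomenon: small, carefully chosen changes to an input can flip a model’s output. Here is a deliberately toy illustration with a linear model (the weights, the input, and the “stop sign” label are all invented for the sketch):

```python
# Toy illustration of adversarial fragility: for a linear model, a small
# nudge against the sign of each weight can flip a confident decision.
# (A stand-in for real attacks on image classifiers; numbers are invented.)
import numpy as np

w = np.array([1.0, -2.0, 0.5])   # hypothetical model weights
b = 0.1
x = np.array([0.4, 0.1, 0.3])    # input the model classifies as positive

print("original score:", w @ x + b)        # 0.45 > 0 -> "stop sign," say

# Fast-gradient-style perturbation: shift each feature slightly in the
# direction that lowers the score. The change is small; the label flips.
epsilon = 0.15
x_adv = x - epsilon * np.sign(w)
print("perturbed score:", w @ x_adv + b)   # -0.075 < 0 -> misclassified
```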

The author does an excellent job establishing that machines are not close to demonstrating humanlike intelligence, and many readers will be reassured to know that we will not soon have to bow down to our computer overlords. It’s almost a surprise, then, when, at the end of the book, Mitchell aligns herself with other researchers “trying to imbue computers with commonsense knowledge and to give them humanlike abilities for abstraction and analogy making” – what she has identified as the missing pieces in creating superintelligent machines. While computers won’t surpass humans anytime soon, not everyone will be convinced that the effort to help them along is a good idea.