Machines that learn: The origin story of artificial intelligence

AI may have had a slow start, but companies like Google and Facebook are pursuing it like a modern-day gold rush.

“Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World” by Cade Metz, Dutton, 384 pp.

Lee Sedol, a world champion in the Chinese strategy board game Go, faced a new kind of adversary at a 2016 match in Seoul.

Developers at DeepMind, an artificial intelligence startup acquired by Google, had fed 30 million Go moves into a deep neural network. Their creation, dubbed AlphaGo, then figured out which moves worked by playing millions of games against itself, learning at a faster rate than any human ever could.

The match, which AlphaGo won 4 to 1, “was the moment when the new movement in artificial intelligence exploded into the public consciousness,” technology journalist Cade Metz writes in his engaging new book, “Genius Makers: The Mavericks Who Brought AI to Google, Facebook, and the World.”

Metz, who covers AI for The New York Times and previously wrote for Wired magazine, is well positioned to chart the decades-long effort to build artificially intelligent machines. His straightforward writing translates industry jargon for technologically unsavvy readers (like me) who might be unfamiliar with what it means for a machine to engage in “deep learning” or master tasks through its own experiences.

Metz chronicles the mad 21st-century gold rush of AI, in which American frontrunner Google has competed domestically (against rivals like Facebook and Microsoft) as well as internationally (against Chinese competitors like Baidu). Each company has spent billions on research, gobbled up startups, and attempted to lure a small pool of talent with the kind of money and urgency usually associated with top NFL prospects. China has announced plans to be the world leader in AI by 2030.

But it wasn’t always so frenetic. Metz traces the origins of AI to 1958, when a Cornell professor taught a computer to learn. The machine was as wide as a kitchen refrigerator, and was fed cards marked with small squares on either the left or right sides. After reading about 50 of them, it began to correctly identify which cards were which – thanks to programming based on the human brain.

Overhyped expectations exceeded the technology of the era, and the study of so-called neural networks capable of replicating human intelligence remained largely fallow in subsequent decades. Even so, by 1991 the technology had advanced to the point that a machine could learn to identify connections on a family tree or drive a Chevy from Pittsburgh to Erie, Pennsylvania.

As interest in AI waxed and waned, early progress in the field came from just a handful of scientists. Metz focuses on Geoffrey Hinton, a British-born Canadian scientist who sold his startup to Google and subsequently won the Turing Award – the Nobel Prize of computing.

Metz’s description of Hinton and the many graduate students he trained at the University of Toronto is a window into the work of a genius. Another pioneer, Demis Hassabis, a co-founder of DeepMind, started off creating computer games before setting out to build “artificial general intelligence” capable of doing anything the human brain could do.

Artificial intelligence, which is still in its infancy, has already remade speech and image recognition and is helping Big Tech companies predict what words you’ll type in an email or what ads you’ll click on next.

The potential is immense. So are the risks, and Metz touches on some of the pitfalls that have already emerged.

For one thing, AI’s output is only as good as the information used to train it. Big Tech companies have, in many instances, relied disproportionately on photos of white men to train photo-recognition tools. That practice led to the awful moment in 2015 when Google Photos labeled pictures of Black people as “gorillas.”

And the same technology that helps self-driving cars identify pedestrians might also help make drone strikes more accurate. Google faced an internal revolt over plans to work with the U.S. Department of Defense.

Metz cites New York Times reporting that the Chinese government worked with AI companies to build facial-recognition technology that could help track and control its minority Uighur population.

For the most part, though, Metz focuses less on the ethics of AI – and its potentially troubling future applications – than he does on how researchers got to the present moment.

Perhaps there will come a day when artificially intelligent robots can read Metz’s book as a history of the baby steps that got them there: viewing millions of YouTube videos to learn how to recognize cats and mastering games like Go.

Seth Stern is an editor at Bloomberg Industry Group.
