In big win for AI, Google computer AlphaGo defeats legendary Go player
AlphaGo uses machine learning to develop strategies for countering the best human players; this kind of adaptability could allow artificial intelligence to make contributions in fields such as medicine and climate science.
After three and a half hours, top-ranked Go player Lee Se-dol had to resign. After some overextensions early in the game, he had played an optimal sequence, but his opponent – a computer running a Google-developed program called AlphaGo – had made several moves so unexpected that they took him aback.
“I was very surprised,” he told reporters after the game. “I never thought I would lose, [but] I didn’t know that AlphaGo would play the game in such a perfect manner.”
There’s hope for the humans yet: the game, played on March 9 in Seoul, South Korea, is only the first in a five-game series. Mr. Lee thinks he has a good chance of beating the computer in subsequent games by changing up his opening moves. But this first victory shows that artificial intelligence can mimic human reasoning and intuition, abilities once thought to be beyond machines.
When IBM’s Deep Blue supercomputer defeated world chess champion Garry Kasparov in 1997, it did so more or less through brute force. The computer could evaluate 200 million chess positions per second, mapping out the most likely path to checkmate by peering many moves into the future. Human players simply can’t compute chess positions that quickly or thoroughly. But a chessboard has just 64 squares, while a Go board is a 19-by-19 grid with 361 points, so the number of possible positions is vastly larger; it’s simply not feasible for a computer to evaluate every possible move the way it would in a game of chess or checkers. Instead, it must learn from past matches and rely on something closer to intuition to predict strong moves.
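To get a rough sense of the gap, consider a back-of-the-envelope comparison. The branching factors below – roughly 35 legal moves per turn in chess and roughly 250 in Go – are commonly cited approximations, not figures from Google or the players; only the 200-million-positions-per-second rate comes from Deep Blue. The sketch simply multiplies out the game trees:

```python
# Rough, illustrative comparison of game-tree growth in chess vs. Go.
# Branching factors are approximate averages, not exact values.

CHESS_BRANCHING = 35   # ~legal moves per turn in chess (approximation)
GO_BRANCHING = 250     # ~legal moves per turn in Go (approximation)
DEPTH = 10             # look ahead ten moves (five by each player)

chess_positions = CHESS_BRANCHING ** DEPTH
go_positions = GO_BRANCHING ** DEPTH

print(f"Chess, {DEPTH} moves deep: ~{chess_positions:.2e} positions")
print(f"Go,    {DEPTH} moves deep: ~{go_positions:.2e} positions")
print(f"The Go tree is ~{go_positions / chess_positions:.0e} times larger")

# At Deep Blue's rate of 200 million positions per second, the chess
# tree at this depth would take months to exhaust; the Go tree would
# take on the order of a hundred million years.
seconds = go_positions / 200_000_000
print(f"Brute force on the Go tree: ~{seconds / 3.15e7:.1e} years")
```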
Many researchers thought that artificial intelligence wouldn’t be able to develop those kinds of strategies until sometime in the 2020s. But AlphaGo relies on machine learning and Google’s "neural network" computers to analyze millions of games of Go, including many it has played against itself.
Neural networks are capable of sifting through data and identifying patterns and relationships on their own, so they’re not limited by a hard-coded set of rules written by developers. Last year, Eric Schmidt, chairman of Google’s parent company Alphabet, said that neural networks and machine learning are on the cusp of revolutionizing scientists’ approaches to big problems in medicine, energy, and climate science.
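As a loose illustration of what learning patterns rather than following rules means – this toy example assumes the NumPy library and bears no resemblance to AlphaGo’s actual networks – a few dozen lines of code can train a tiny neural network to reproduce a simple pattern purely from labeled examples, with no rule describing the pattern ever written into the program:

```python
import numpy as np

# Toy illustration: a small two-layer neural network learns the XOR
# pattern purely from examples. No rule for XOR is hard-coded; the
# network infers the relationship by adjusting its weights.
# (A teaching sketch only, nothing like AlphaGo's networks.)

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)  # inputs
y = np.array([[0], [1], [1], [0]], dtype=float)              # XOR labels

W1 = rng.normal(size=(2, 8))   # input -> hidden weights
b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1))   # hidden -> output weights
b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 1.0
for step in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: gradients of squared error w.r.t. each layer
    d_out = (out - y) * out * (1 - out)
    d_h = d_out @ W2.T * h * (1 - h)

    # Gradient-descent update
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0, keepdims=True)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0, keepdims=True)

print(np.round(out, 2))  # should approach [[0], [1], [1], [0]]
```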
AlphaGo trounced European Go champion Fan Hui last October, and Google’s DeepMind artificial intelligence team says the program has grown stronger since then. But Lee posed a greater challenge. In addition to being a stronger player, he is known for making unorthodox moves early in games to lure opponents into mistakes. AlphaGo countered those moves in the first game, putting Lee on the back foot and showing that the computer can adapt to unexpected circumstances – but Lee might still bamboozle AlphaGo in the games to come. The series is being live-streamed with Korean and English commentary on DeepMind’s YouTube channel.