AI takes on its next challenge: StarCraft
From chess to Go, board games have been the first frontier of artificial intelligence research for decades. Now, the team at Google’s DeepMind wants to take AI to a whole new level by beating the online strategy game StarCraft II.
DeepMind announced its decision to partner with StarCraft’s creator, Blizzard, at a conference in California. The two groups say that they look forward to programming a computer to react to strategic problems in real time.
“DeepMind is on a scientific mission to push the boundaries of AI, developing programs that can learn to solve any complex problem without needing to be told how,” wrote DeepMind in a blog post. “Games are the perfect environment in which to do this, allowing us to develop and test smarter, more flexible AI algorithms quickly and efficiently, and also providing instant feedback on how we’re doing through scores.”
In StarCraft, players begin the game on different sides of a virtual map, where they must gather information, build up their forces, and strategize to win virtual battles. The "fog of war" hides parts of the map from each player, unlike in chess or Go, and that limited visibility presents a new challenge for artificial intelligence.
“An agent that can play StarCraft will need to demonstrate effective use of memory, an ability to plan over a long time, and the capacity to adapt plans based on new information,” wrote DeepMind.
Artificial intelligence has grown rapidly more sophisticated since its first major game breakthrough: IBM’s Deep Blue defeating world chess champion Garry Kasparov in 1997.
It took AI developers 40 years to beat a chess champion. For a long time, it was unclear whether computers could ever develop the same “smarts” that allow chess champions to win again and again. Prior to Deep Blue’s victory, The Christian Science Monitor’s Laurent Belsie reported:
“Even though these machines are beginning to beat us at our own games, their "smarts" and mankind's intelligence are fundamentally different. Thus, the foreseeable future will not entail some apocalyptic vision of mankind versus machine. Overall, computers cannot match human wits. Instead, artificial intelligence will complement real intelligence.”
Computers don’t come with “built-in common sense,” wrote Belsie. In 1997, this was true. Deep Blue eventually beat Kasparov, not because the AI was a better long-term strategist, but because it was better at calculating the best move for any given position on the board.
What Google’s UK-based DeepMind is trying to accomplish, however, is something far more sophisticated than Deep Blue’s “brute force” victory, or IBM Watson’s later Jeopardy trivia wins. Instead, DeepMind wants its newest AI to remember, to strategize, and to learn.
Earlier this year, DeepMind’s AlphaGo took artificial intelligence several steps further, beating world champion Lee Se-dol at the complex strategy game Go. By analyzing thousands of games, AlphaGo learned to recognize patterns and strategies, eventually besting Mr. Lee.
With StarCraft II, DeepMind’s AI will face an even steeper challenge: managing a shifting game economy, gathering resources, and out-strategizing a capable human opponent. To be competitive, it will have to be flexible and adaptable, ready to change strategies at the drop of a hat.
With Blizzard, DeepMind is hoping to develop such a program. And Blizzard stands to benefit as well, finding new ways to improve StarCraft itself.
“Is there a world where an AI can be more sophisticated, and maybe even tailored to the player?” said Blizzard’s Chris Sigaty, executive producer of StarCraft II, according to the Guardian. “Can we do coaching for an individual, based on how we teach the AI? There’s a lot of speculation on our side about what this will mean, but we’re sure it will help improve the game.”