Google's AI could probably beat you at Atari
Computers have already beaten humans at chess and "Jeopardy!," and now they can add one more feather to their caps: the ability to best humans in several classic arcade games.
A team of scientists at Google created an artificially intelligent computer program that can teach itself to play Atari 2600 video games, using only minimal background information.
By mimicking some principles of the human brain, the program is able to play at the same level as a professional human gamer, or better, on most of the games, researchers reported today (Feb. 25) in the journal Nature.
This is the first time anyone has built an artificial intelligence (AI) system that can learn to excel at a wide range of tasks, study co-author Demis Hassabis, an AI researcher at Google DeepMind in London, said at a news conference yesterday.
Future versions of this AI program could be used in more general decision-making applications, from driverless cars to weather prediction, Hassabis said.
Learning by reinforcement
Humans and other animals learn by reinforcement: engaging in behaviors that maximize some reward. For example, pleasurable experiences cause the brain to release the neurotransmitter dopamine. But in order to learn in a complex world, the brain has to interpret input from the senses and use these signals to generalize from past experiences and apply them to new situations.
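In software, this reward-driven trial and error is often formalized as Q-learning: the program keeps a running estimate of how valuable each action is in each situation, and nudges that estimate toward the rewards it actually receives. The sketch below is a minimal, tabular illustration of that idea, not DeepMind's code; the function names and constants are assumptions chosen for the example.

```python
# Minimal tabular Q-learning sketch, illustrating reward-maximizing
# reinforcement learning in general. Not DeepMind's implementation;
# all names and constants here are assumptions for the example.

ALPHA = 0.1    # learning rate: how far each update moves the estimate
GAMMA = 0.99   # discount factor: how much future rewards count

q_table = {}   # maps (state, action) pairs to estimated value

def q_value(state, action):
    return q_table.get((state, action), 0.0)

def update(state, action, reward, next_state, actions):
    # Nudge the value of (state, action) toward the reward received
    # plus the best value currently estimated for the next state.
    best_next = max(q_value(next_state, a) for a in actions)
    target = reward + GAMMA * best_next
    q_table[(state, action)] = q_value(state, action) + ALPHA * (target - q_value(state, action))
```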
When IBM's Deep Blue computer defeated chess grandmaster Garry Kasparov in 1997, and the artificially intelligent Watson computer won the quiz show "Jeopardy!" in 2011, those were considered impressive technical feats, but both systems relied mostly on preprogrammed abilities, Hassabis said. In contrast, the new DeepMind AI is capable of learning on its own, using reinforcement.
To develop the new AI program, Hassabis and his colleagues created an artificial neural network based on "deep learning," a machine-learning algorithm that builds progressively more abstract representations of raw data. (Google famously used deep learning to train a network of computers to recognize cats based on millions of YouTube videos, but this type of algorithm is actually involved in many Google products, from search to translation.)
The new AI program is called the "deep Q-network," or DQN, and it runs on a regular desktop computer.
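According to the Nature paper, DQN couples that reinforcement-learning idea with a convolutional neural network that maps raw screen pixels to one estimated value per joystick action. The sketch below is a rough modern rendering of that kind of network in PyTorch; the layer shapes follow the published architecture, but the code itself is an illustration, not the team's original implementation.

```python
import torch.nn as nn

# Rough sketch of a DQN-style network: stacked game frames in, one
# estimated value per joystick action out. Layer shapes follow the
# architecture reported in the Nature paper; everything else is an
# assumption for illustration.

class DQN(nn.Module):
    def __init__(self, num_actions):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(4, 32, kernel_size=8, stride=4),  # 4 stacked 84x84 frames
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=4, stride=2),
            nn.ReLU(),
            nn.Conv2d(64, 64, kernel_size=3, stride=1),
            nn.ReLU(),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(64 * 7 * 7, 512),  # 7x7 feature maps for 84x84 input
            nn.ReLU(),
            nn.Linear(512, num_actions),  # one value estimate per action
        )

    def forward(self, frames):
        return self.head(self.features(frames))
```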
Playing games
The researchers tested DQN on 49 classic Atari 2600 games, such as "Pong" and "Space Invaders." The only pieces of information about the game that the program received were the pixels on the screen and the game score.
"The system learns to play by essentially pressing keys randomly" in order to achieve a high score, study co-author Volodymyr Mnih, also a research scientist at Google DeepMind, said at the news conference.
After a couple of weeks of training, DQN performed as well as professional human gamers on many of the games, which ranged from side-scrolling shooters to 3D car-racing games, the researchers said. The AI program scored at least 75 percent of the human score on more than half of the games, they added.
Sometimes, DQN discovered game strategies the researchers hadn't even thought of. In the game "Seaquest," for example, the player controls a submarine and must avoid, collect or destroy objects at different depths. The AI program discovered it could stay alive by simply keeping the submarine just below the surface, the researchers said.
More complex tasks
DQN also made use of another feature of human brains: the ability to remember past experiences and replay them in order to guide actions (a process that occurs in a seahorse-shaped brain region called the hippocampus). Similarly, DQN stored "memories" from its experiences, and fed these back into its decision-making process during gameplay.
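In machine-learning terms, this memory mechanism is called "experience replay": the program stores past transitions (what it saw, what it did, what reward followed) and later trains on random batches of them. A minimal sketch of such a buffer, with sizes that are illustrative assumptions rather than the paper's exact settings:

```python
import random
from collections import deque

# Sketch of an experience-replay buffer: store past transitions and
# train on random batches of them later. Capacity and batch size are
# illustrative assumptions.

class ReplayBuffer:
    def __init__(self, capacity=100_000):
        self.buffer = deque(maxlen=capacity)  # oldest memories fall off the end

    def store(self, state, action, reward, next_state):
        self.buffer.append((state, action, reward, next_state))

    def sample(self, batch_size=32):
        # Random sampling breaks up correlations between consecutive
        # frames, which helps stabilize learning.
        return random.sample(self.buffer, batch_size)
```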
But human brains don't remember all experiences the same way. They're biased to remember more emotionally charged events, which are likely to be more important. Future versions of DQN should incorporate this kind of biased memory, the researchers said.
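One plausible way to realize that kind of biased memory in code is to replay transitions in proportion to how surprising they were, for example, how large the program's last prediction error was on each one. The sketch below illustrates the idea only; it is an assumption, not something described in the paper.

```python
import random

# Biased replay: sample surprising memories (large prediction error)
# more often than routine ones. Purely an illustration of the idea;
# the weighting scheme is an assumption.

def sample_prioritized(transitions, errors, batch_size=32):
    total = sum(errors)
    weights = [e / total for e in errors]  # bigger error -> replayed more often
    return random.choices(transitions, weights=weights, k=batch_size)
```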
Now that their program has mastered Atari games, the scientists are starting to test it on more complex games from the '90s, such as 3D racing games. "Ultimately, if this algorithm can race a car in racing games, with a few extra tweaks, it should be able to drive a real car," Hassabis said.
In addition, future versions of the AI program might be able to do things such as plan a trip to Europe, booking all the flights and hotels. But "we're most excited about using AI to help us do science," Hassabis said.