Google aims to make smartphones that learn, understand the world around them

Google is partnering with Movidius, a chip designer specializing in low-power computer vision processors, to put machine learning technology in mobile devices. This tech could eventually allow smartphones to understand images, speech, and written words – and to solve problems on their own.

Google and chip designer Movidius will work together to put machine learning technology in mobile devices. Here, a football fan uses a smartphone at Levi's Stadium in Santa Clara, Calif., on September 14, 2014.

Noah Berger/AP/File

January 29, 2016

Last October, Google CEO Sundar Pichai spoke about what machine learning means for Google. He called it “transformative,” and said that Google was “rethinking everything we’re doing” in the context of machine learning.

A subset of artificial intelligence, machine learning is an approach in which computers use "neural networks," algorithms loosely modeled on the human brain, to sort through large sets of data, discovering patterns and relationships on their own rather than following predefined rules to solve problems.
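For readers curious what "discovering patterns on their own" looks like in practice, here is a minimal, self-contained sketch – not Google's system, just one of the oldest learning algorithms – in which a single artificial neuron teaches itself to separate two classes of toy points instead of being handed a rule:

```python
import random

random.seed(0)

def make_point():
    # Toy data: points labeled 1 if they lie above the line y = x, else 0.
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    return x, y, 1 if y > x else 0

data = [make_point() for _ in range(200)]

# One artificial "neuron": two weights and a bias, adjusted by trial and error.
w1, w2, b = 0.0, 0.0, 0.0
lr = 0.1  # learning rate: how big each correction is

for epoch in range(20):
    for x, y, label in data:
        prediction = 1 if (w1 * x + w2 * y + b) > 0 else 0
        error = label - prediction      # -1, 0, or +1
        w1 += lr * error * x            # nudge the weights toward correct answers
        w2 += lr * error * y
        b += lr * error

correct = sum(1 for x, y, label in data
              if (1 if (w1 * x + w2 * y + b) > 0 else 0) == label)
print(f"learned rule: {w1:.2f}*x + {w2:.2f}*y + {b:.2f} > 0")
print(f"accuracy on the toy data: {correct / len(data):.0%}")
```

The program is never told that the boundary is the line y = x; it arrives at an equivalent rule purely by correcting its mistakes on examples.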

Google already uses machine learning to fight spam in Gmail and to allow its services to better understand spoken words. And this week the company announced it’s working to bring machine learning to the device people spend the most time interacting with: the smartphone.


Google will partner with chip designer Movidius to put neural network technology in mobile devices, the companies announced in a press release, a step toward handsets that can understand speech and images the way a person would.

Eventually, the companies say, smartphones will be able to recognize faces and objects, understand speech, and read signs and menus as easily as humans can. Rather than following a predefined set of instructions, phones will be able to make their own decisions about how to solve problems such as identifying a landmark or translating between languages.

“Future products can have the ability to understand images and audio with incredible speed and accuracy,” Google and Movidius wrote in the press release.

How would it work? Right now, most machine learning takes place in huge data centers, where neural networks chew through vast numbers of images, videos, and articles, performing mathematical operations on the data in order to classify and understand them. Given enough recordings of spoken words, for example, a neural network can learn to recognize different accents and inflections.
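The "mathematical operations" involved are surprisingly mundane. The hypothetical sketch below, using made-up weights rather than values learned from real data, shows that classifying an input with a small trained network reduces to matrix multiplications followed by simple nonlinear functions:

```python
import numpy as np

def relu(v):
    # Nonlinearity: pass positive values through, zero out the rest.
    return np.maximum(0, v)

def softmax(v):
    # Turn raw scores into probabilities that sum to 1.
    e = np.exp(v - v.max())
    return e / e.sum()

# Illustrative random weights; a real network learns these from millions of examples.
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # layer 1: 4 inputs -> 8 units
W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)   # layer 2: 8 units -> 3 classes

x = np.array([0.2, -0.5, 1.0, 0.3])             # stand-in for image or audio features
hidden = relu(W1 @ x + b1)                      # first round of matrix math
scores = softmax(W2 @ hidden + b2)              # probability for each class

print("class probabilities:", scores.round(3))
```

A specialized chip like the ones Movidius designs is, in essence, hardware built to run exactly this kind of arithmetic quickly and with very little power.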

When a person uses his or her smartphone to, say, translate a sentence from one language to another, the phone uploads the data to a server, where neural networks operate on it and return a result. But this takes time and relies on a wireless connection that may be shaky. Under the new partnership, Movidius will supply Google with microprocessors that can perform neural network computations locally, allowing smartphones to understand input without having to “phone home.”
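The difference between the two designs can be sketched in a few lines. The function names, timings, and translations below are purely illustrative stand-ins, not Google's or Movidius's actual APIs:

```python
import time

def translate_via_server(sentence: str) -> str:
    """Cloud path: upload the data and wait for a remote neural network."""
    time.sleep(0.3)  # stand-in for network latency over a possibly shaky connection
    return f"[server translation of: {sentence}]"

def translate_locally(sentence: str) -> str:
    """On-device path: a dedicated chip runs the neural network locally,
    so there is no round trip and no need to 'phone home'."""
    return f"[on-device translation of: {sentence}]"

for fn in (translate_via_server, translate_locally):
    start = time.perf_counter()
    result = fn("Where is the train station?")
    elapsed = (time.perf_counter() - start) * 1000
    print(f"{fn.__name__}: {result!r} in {elapsed:.0f} ms")
```

The on-device path also keeps working when there is no signal at all, which is part of the appeal for features like live translation of signs and menus.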


The companies didn’t say when this technology might hit the market, or whether future Google smartphones will run on Movidius chips. But as machine learning software and neural network hardware progress, handsets will come to have all the tools they need to make sense of the world around them – and, hopefully, to learn and adapt to users’ day-to-day needs.