Google can build smart AI machines. But can it build artistic ones?

A new deep-learning project at Google aims to build AI systems that can study, remember, and eventually create music and other media.

The Google logo after being processed through an artificial neural network.

Michael Tyka

May 23, 2016

Google's artificial intelligence (AI) technology, which has already proven capable of matching human problem-solving, may soon be able to think creatively.

Google's AI continues to make strides: this year it has been legally considered the driver of the tech giant's autonomous cars, has beaten people at recognizing where photos were taken, and has even learned to understand words and events taking place around it in the real world. Its practical applications are constantly expanding, but with its new Google Magenta project, the company hopes AI can become artistic as well.

"The question Magenta asks is, 'Can machines make music and art? If so, how? If not, why not?'" Google machine-learning research scientist Douglas Eck wrote in a blog post on the project. "The goal [of] Magenta is to produce open-source tools and models that help creative people be even more creative."


Magenta is based on Google's publicly available TensorFlow deep-learning engine, which uses algorithms to help computers find patterns in datasets that allow for a "learning" process similar to that of humans. Google AI and Magenta are a long way off from becoming unique, original artists, but the new program could set up the technology for future creations.
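The "learning" TensorFlow automates boils down to adjusting a model until its predictions match the patterns in a dataset. This is not Magenta's code, just a minimal sketch of that loop in plain Python: gradient descent fitting a single weight to examples drawn from y = 2x.

```python
# Toy version of the pattern-finding loop a deep-learning engine automates:
# nudge a weight downhill on the prediction error until it fits the data.
data = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]  # inputs paired with targets (y = 2x)

w = 0.0  # the model: predict y as w * x
for _ in range(200):
    # gradient of mean squared error with respect to w
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad  # step against the gradient

print(round(w, 3))  # the learned weight settles near the true pattern, 2.0
```

Real systems like TensorFlow do the same thing with millions of weights, and compute the gradients automatically.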

Magenta is designed to let users explore AI's creative abilities, and Mr. Eck's first goal is to have the computer make and enhance music. By feeding audio files into the TensorFlow system and training it on those sounds, the technology could eventually create its own music.

Full AI compositions most likely will not materialize for years, but Eck's team has already developed a program to the point that it can hear a simple string of notes and build a longer melody from the original sounds.

That success is similar to the results of Google's deep neural network visualizations, which produced interesting, if slightly off-putting, images by feeding pictures through AI. That technology created its final outputs by repeatedly associating features of an image with others the network had already experienced, layering visuals with infusions from the network. Using neural networks for music could prove more challenging, though, as ordered musical notes are inherently more complex than static imagery.
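The actual visualizations amplify activations deep inside a trained network; as a loose illustration only, here is a toy NumPy loop in the same spirit, with a hand-written "layer" standing in for the network. Whatever the layer already responds to gets fed back into the image, pass after pass.

```python
import numpy as np

def edge_layer(img):
    """A fixed stand-in 'layer': responds to horizontal brightness changes."""
    act = np.zeros_like(img)
    act[:, 1:] = img[:, 1:] - img[:, :-1]
    return act

def dream(img, steps=20, rate=0.1):
    """Repeatedly boost whatever the layer already detects in the image."""
    out = img.copy()
    for _ in range(steps):
        out = np.clip(out + rate * edge_layer(out), 0.0, 1.0)
    return out

rng = np.random.default_rng(0)
image = rng.random((8, 8))
result = dream(image)
```

In the real technique the layer is deep in a trained network and the feedback is a gradient step, so the hallucinated features are things the network learned from data rather than simple edges.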

Eck's Magenta technology is still "very far from long narrative arcs" in music, as Quartz reports, although the system may be able to develop tunes in certain situations relatively soon. Eck mentioned the potential for AI programs to recognize a person's state of mind through other devices and generate music adapted to the user's mood.


Once the musical phase is complete, Eck says he plans for the Magenta group to try using AI to create video and image compositions as well. Magenta is set to officially begin work in June, and its systems will soon be open-sourced online for public use.