Google researchers build networks that invent their own encryption

Neural networks nicknamed 'Alice' and 'Bob' were taught to keep secrets from an adversarial network nicknamed 'Eve,' evolving their methods until Eve could no longer read their private messages.

People are silhouetted as they pose with laptops in front of a screen projected with a Google logo, in this picture illustration taken Oct. 29, 2014, in Zenica, Bosnia and Herzegovina. (Dado Ruvic/Reuters/File)

October 30, 2016

Encryption software written by human programmers already protects sensitive data as it changes hands across a network, ensuring that only the intended recipient of any message can unlock it. But what if a network could write its own encryption software, inventing a security system to which no humans have a key?

Researchers with Google Brain, a "deep learning" initiative within the company best known for its search engine, published a paper last week showing that neural networks can do just that.

By teaching two neural networks, nicknamed "Alice" and "Bob," to communicate with each other while keeping the contents of their messages secret from an adversarial third network, "Eve," the researchers effectively demonstrated that artificial intelligence (AI) can be unleashed as a tireless tactician in the never-ending struggle for data security. The approach, although still in its early stages, could revolutionize a broad array of scientific problem-solving.
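At the heart of the experiment is an adversarial objective: Bob and Eve each try to reconstruct Alice's message, while Alice and Bob together try to hold Eve at chance level. The Python sketch below is a loose illustration of how those competing goals can be written as loss terms; the function names and the exact penalty formula are assumptions made for clarity here, not the paper's actual TensorFlow code.

```python
import numpy as np

N_BITS = 16  # plaintext, key, and ciphertext in the paper are all 16 values

def reconstruction_error(guess, plaintext):
    # Mean absolute difference between a network's output and the true bits.
    return np.mean(np.abs(guess - plaintext))

# Bob and Eve are each trained to minimize their own reconstruction error.
def bob_loss(bob_guess, plaintext):
    return reconstruction_error(bob_guess, plaintext)

def eve_loss(eve_guess, plaintext):
    return reconstruction_error(eve_guess, plaintext)

# Alice and Bob are trained jointly: Bob must recover the message while Eve
# stays near chance. With bits encoded as -1/+1, an uninformed Eve hovers
# around an error of 1.0 per bit.
def alice_bob_loss(bob_guess, eve_guess, plaintext):
    chance_level = 1.0
    eve_penalty = (chance_level - eve_loss(eve_guess, plaintext)) ** 2
    return bob_loss(bob_guess, plaintext) + eve_penalty
```

Notably, the penalty grows whenever Eve's error drifts away from chance in either direction: an eavesdropper who is reliably wrong leaks information just as surely as one who is reliably right.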

"Computing with neural nets on this scale has only become possible in the last few years, so we really are at the beginning of what's possible," Joe Sturonas of encryption company PKWARE in Milwaukee, Wis., told New Scientist.

Google chairman Eric Schmidt said in 2014 that AI research has been building steadily since its conception in 1955, and he predicted last year that AI will take off in the near future, paving the way to breakthroughs in genomics, energy, climate science, and other areas, as The Christian Science Monitor reported.

In the meantime, researchers are playing games. More specifically, they are building machines that learn by playing games.

Earlier this year, a computer running a program developed by Google outmaneuvered top-ranked human player Lee Se-dol in the ancient board game Go. The triumph marked a significant advance beyond computerized chess, as the Monitor's correspondent Jeff Ward-Bailey reported in March:

When IBM’s Deep Blue supercomputer defeated world chess champion Garry Kasparov in 1997, it did so more or less through brute force. The computer could evaluate 200 million chess positions per second, mapping out the most likely path to checkmate by peering many moves into the future. Human players simply can’t compute chess positions that quickly or thoroughly. But a chessboard is eight squares by eight squares while a Go board is 19 squares by 19 squares, which means it’s simply not feasible for a computer to evaluate all possible moves the way it would in a game of chess or checkers. Instead, it must use intuition to learn from past matches and predict optimal moves. 

Many researchers thought that artificial intelligence wouldn’t be able to develop those kinds of strategies until sometime in the 2020s. But AlphaGo relies on machine learning and Google’s "neural network" computers to be able to analyze millions of games of Go, including many it has played against itself.
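A back-of-the-envelope calculation illustrates the gulf Ward-Bailey describes. The figures below are commonly cited approximations of each game's average branching factor and length, not exact counts:

```python
# Rough, commonly cited averages; exact values are not settled.
chess_branching, chess_moves = 35, 80   # legal moves per turn, moves per game
go_branching, go_moves = 250, 150

chess_tree = chess_branching ** chess_moves
go_tree = go_branching ** go_moves

print(f"chess game tree: roughly 10^{len(str(chess_tree)) - 1}")  # ~10^123
print(f"go game tree:    roughly 10^{len(str(go_tree)) - 1}")     # ~10^359
```

Both numbers dwarf any conceivable computing budget, but Go's game tree is larger than chess's by more than 200 orders of magnitude, which is why exhaustive search was never an option.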

Instead of relying on rules provided by human developers, neural networks sift through large amounts of data, looking for patterns and relationships to inform future computations. In this case, Google researchers had Alice send Bob 16-bit messages composed of ones and zeros, in encrypted form. The two networks began with a shared key that Eve did not possess, and their encryption method evolved as training progressed.
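For a sense of the raw ingredients, here is a minimal Python sketch of that starting setup; the variable names are illustrative, and the one-time pad is included only as a familiar human-designed baseline, not as what the networks learned.

```python
import numpy as np

rng = np.random.default_rng(0)

N_BITS = 16
plaintext = rng.integers(0, 2, size=N_BITS)   # the message Alice must deliver
shared_key = rng.integers(0, 2, size=N_BITS)  # known to Alice and Bob, hidden from Eve

# A classic human-designed scheme, the one-time pad, simply XORs the two:
otp_ciphertext = plaintext ^ shared_key

# Alice, by contrast, may learn any transformation of her two inputs; the
# paper's analysis suggests the scheme she converged on was not a simple
# per-bit operation like this XOR.
print(plaintext, shared_key, otp_ciphertext)
```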

Eve effectively decrypted the first 7,000 messages, but she quickly faltered thereafter, thwarted by the constantly changing tactics employed by the other two.
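That cat-and-mouse dynamic falls out of how the training alternates between the two sides. The sketch below, using PyTorch, is a compact, hypothetical version of such a loop; the architectures, batch size, and penalty term are stand-ins chosen for brevity (the paper built convolutional networks in TensorFlow), but the alternation is the essential ingredient: Alice and Bob improve, then Eve gets a fresh chance to attack, and the cycle repeats.

```python
import torch
import torch.nn as nn

N = 16  # bits per message and key, encoded as -1/+1 floats

def net(n_inputs):
    # Tiny stand-in networks; the paper used convolutional architectures.
    return nn.Sequential(nn.Linear(n_inputs, 64), nn.ReLU(),
                         nn.Linear(64, N), nn.Tanh())

alice, bob, eve = net(2 * N), net(2 * N), net(N)
opt_ab = torch.optim.Adam(list(alice.parameters()) + list(bob.parameters()))
opt_e = torch.optim.Adam(eve.parameters())

def bits(rows):
    return torch.randint(0, 2, (rows, N)).float() * 2 - 1

for step in range(3000):
    # Alice/Bob update: Bob should recover p; Eve should sit near chance (~1.0).
    p, k = bits(256), bits(256)
    c = alice(torch.cat([p, k], dim=1))
    bob_err = (bob(torch.cat([c, k], dim=1)) - p).abs().mean()
    eve_err = (eve(c) - p).abs().mean()
    loss_ab = bob_err + (1.0 - eve_err) ** 2
    opt_ab.zero_grad()
    loss_ab.backward()
    opt_ab.step()

    # Eve update: the adversary trains on fresh traffic each round.
    p, k = bits(256), bits(256)
    c = alice(torch.cat([p, k], dim=1)).detach()  # no gradients back to Alice
    loss_e = (eve(c) - p).abs().mean()
    opt_e.zero_grad()
    loss_e.backward()
    opt_e.step()
```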

"We demonstrate that the neural networks can learn how to perform forms of encryption and decryption, and also how to apply these operations selectively in order to meet confidentiality goals," researchers Martín Abadi and David G. Andersen wrote in their paper published last week.

"While it seems improbable that neural networks would become great at cryptanalysis, they may be quite effective in making sense of metadata and in traffic analysis," the researchers added.

John Biggs, writing for TechCrunch, noted that the researchers demonstrated how computers might be able to keep secrets not only from each other but from humans as well.

"This means robots will be able to talk to each other in ways that we – or other robots – won’t be able to crack. I, for one, welcome our robotic cryptographic overlords," he quipped.

But that very secrecy, others noted, may limit the technique's usefulness in real-world applications.

"Because of the way the machine learning works, even the researchers don't know what kind of encryption method Alice devised, so it won't be very useful in any practical applications," Andrew Dalton wrote for Engadget. "In the end, it's an interesting exercise, but we don't have to worry about the machines talking behind our backs just yet."