Startup pairs man with machine to crack the 'black box' of neural networks

New startup Gamalon has come out with a machine learning approach it claims is a hundred times more efficient than Google's TensorFlow.

South Korean professional Go player Lee Sedol reviews the final match against Google's artificial intelligence program, AlphaGo, in South Korea last year. A new startup claims to have developed a more powerful machine learning strategy.

Lee Jin-man/AP/File

February 16, 2017

What if programs could take common sense suggestions from people, instead of relying blindly on data? What if machines could act more like humans, jumping to conclusions after just a few observations? These are two of the questions being asked by ambitious new machine learning startup Gamalon.

Deep learning has been the darling of the artificial intelligence (AI) community for years, equipping machines with the ability to beat humans at video games and recognize everything from cats to dumbbells, but the approach requires vast computational resources and ends with the production of a "black box" function, inscrutable to humans.

Cambridge, Mass.-based startup Gamalon, which came out of stealth mode on Tuesday, aims to harness the power of probability to improve on these weaknesses. Their new approach, which they call “Bayesian Program Synthesis” (BPS), uses the “Bayesian” branch of statistics to automatically combine and modify simple program pieces, incorporating human input, to carry out more complex tasks.


Early results are promising. While standard image recognition systems need to practice on thousands of sample pictures capturing the range of possible lightings, colors, shapes, and sizes, the new algorithm can get started after just a few examples and refine its thinking on the fly. In at least some cases, Gamalon claims a 100-fold efficiency improvement over Google’s open source TensorFlow platform, which powers a number of familiar products including Gmail, Google Photos, and speech recognition.

We humans use Bayesian reasoning all the time. For example, say you want to know the chance of rain tomorrow. One approach would be to look up the average annual number of rainy days in your area and divide by 365. There’s nothing inherently wrong with this data-heavy strategy, but it might not give great results during a New England winter, when common sense suggests many sub-freezing days will bring plenty of snow but zero rain. Roughly speaking, Bayesian probability provides a framework that lets us apply this kind of prior knowledge to a data set for more accurate predictions.
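That weather intuition can be written out as a one-step Bayesian update. The sketch below uses entirely made-up numbers for the prior and likelihoods; it only illustrates how knowing "it's a sub-freezing day" pulls the naive base rate downward.

```python
# Toy Bayesian update for "will it rain tomorrow?" (all figures invented).
# Prior: the naive base rate from annual averages.
p_rain = 120 / 365            # pretend ~120 rainy days a year

# Likelihoods: how often a sub-freezing day occurs,
# given that it does or does not rain.
p_cold_given_rain = 0.05      # rain on freezing days is rare
p_cold_given_no_rain = 0.30   # cold, dry (or snowy) days are common in winter

# Bayes' rule: P(rain | cold) = P(cold | rain) * P(rain) / P(cold)
p_cold = (p_cold_given_rain * p_rain
          + p_cold_given_no_rain * (1 - p_rain))
p_rain_given_cold = p_cold_given_rain * p_rain / p_cold

print(round(p_rain, 3))             # prior: 0.329
print(round(p_rain_given_cold, 3))  # posterior: 0.075, far lower
```

The data-heavy estimate (about 33 percent) collapses to under 8 percent once the prior knowledge about freezing temperatures is factored in.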

There’s nothing new about using this branch of statistics in machine learning, as it's already baked into every deep learning network. "Mathematically, every neural network is a Bayesian program," Gamalon chief executive Ben Vigoda points out. "But not all Bayesian programs are neural networks." His team's innovation is the creation of a system that can automatically synthesize and transform simple programs to solve more difficult problems.

As Dr. Vigoda explains it, the programs act like little scientists.

“This is essentially the scientific method. Hypothesize, test via experiment, iterate. We do this using a Bayesian framework for considering the space of hypotheses in the light of the evidence,” he tells The Christian Science Monitor in an email – in other words, using statistics to narrow down the number of possibilities. 
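The loop Vigoda describes can be sketched in a few lines. This is not Gamalon's actual system, just a minimal illustration of weighing a space of hypotheses against evidence: the program keeps a probability for each candidate explanation of an unknown coin and re-weights after every observation, so implausible hypotheses are narrowed away.

```python
# Minimal sketch (not Gamalon's system) of hypothesize-test-iterate:
# maintain probabilities over candidate hypotheses and update on evidence.

hypotheses = {          # candidate biases for an unknown coin
    "fair": 0.5,
    "heads-heavy": 0.8,
    "tails-heavy": 0.2,
}
posterior = {name: 1 / len(hypotheses) for name in hypotheses}  # uniform prior

observations = ["H", "H", "T", "H", "H"]   # evidence from "experiments"
for flip in observations:
    for name, p_heads in hypotheses.items():
        likelihood = p_heads if flip == "H" else 1 - p_heads
        posterior[name] *= likelihood      # re-weight by the evidence
    total = sum(posterior.values())
    posterior = {n: w / total for n, w in posterior.items()}  # renormalize

print(max(posterior, key=posterior.get))   # heads-heavy
```

After four heads in five flips, the "heads-heavy" hypothesis dominates, even though only a handful of observations were needed.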


The self-correcting, repetitive process resembles classical deep learning in some ways, but has a wildly different result. Modern neural networks, such as those built with TensorFlow, contain billions of “neurons,” which are really just decimal numbers, whose relationships to each other are constantly refined as the system strives for ever greater accuracy.

During training, these values get adjusted up and down to give better predictions, but in the end a string of a billion numbers is impossible to read and analyze.

"Humans can't interpret these, we don't know what the system has learned," Vigoda explains.

Additionally, deep learning's messy, real-world training data can teach machines unintended lessons, as Google was surprised to learn when its image search once concluded that dumbbells come with arms attached, forming unsettling pairs.

BPS works differently from what Vigoda calls the “black art” of neural networks. Rather than adjusting decimal numbers, it modifies the code in its model to try to match its predictions with what a human would say. “After training, if we go into the system to see what it has learned... We can see exactly what the system is thinking,” he says.

This open dialogue between man and machine lets developers quickly update the code with new rules of thumb, just as the knowledge that you’re in New England lets you update your mental weather forecast.

Vigoda compares BPS to Google's 2012 demonstration of cat recognition, when it trained AI to identify YouTube's favorite creatures based on videos: “Say I know a rule that a cat cannot have a horn on its head. There’s no such thing as a ‘uni-cat’. How do I tell TensorFlow to obey that rule? I can’t go in and adjust some of the billions of synapse numbers.... In our BPS system, it is much much easier for the computer or the human to add additional assumptions to a model, and then test them on the data.”
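Because the model is readable code rather than a billion synapse numbers, a human rule like "no uni-cats" can be added as a single line. The sketch below is purely hypothetical; the function name, features, and scores are invented for illustration and do not reflect Gamalon's API.

```python
# Hypothetical sketch of adding a human rule to a readable model.
# All names and numbers are illustrative, not Gamalon's actual code.

def cat_model(features):
    """Score how cat-like a set of extracted image features is."""
    # Human-supplied rule, written directly into the model:
    # a cat cannot have a horn, so veto any "uni-cat" outright.
    if features.get("horn"):
        return 0.0
    # Toy base score (made-up numbers), standing in for the learned part.
    return 0.7 if features.get("whiskers") else 0.2

print(cat_model({"whiskers": True}))                # 0.7
print(cat_model({"whiskers": True, "horn": True}))  # 0.0, the rule vetoes it
```

The point is that the constraint is one legible line of code a developer can add, test against data, and later remove, rather than an adjustment buried across billions of weights.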

But if you come at the king, you best not miss. Can newcomer Gamalon make good on its bold claims targeting machine learning giant Google?

New York University machine learning researcher Brenden Lake calls the approach “very interesting, but difficult to assess without knowing the details.” Dr. Lake, who is not involved in the startup, points out similarities to a previous program he helped develop that could recognize handwritten characters more accurately than humans could after just a few trials.

In general, Lake agrees with Vigoda’s description of the method’s flexibility. “It is also a promising way to add and utilize prior knowledge in machine learning, without having to learn everything from scratch using a very large data set,” he explains to The Monitor in an email.

But he doesn’t see deep learning going away anytime soon. Rather, he predicts the two can work together for even better results.

Vigoda agrees, calling it “not an either-or.” Instead, he calls BPS a generalization, a framework to lay on top of neural networks to improve their performance.

Nevertheless, the startup founder is confident that his company’s new architecture is a big step forward in machine learning.

“Experts who play with our system all come away feeling that in five years, [Bayesian Program Synthesis] and Bayesian programming (by a human) ... will be the unified engineering practice for machine learning practitioners and software developers who are building complex machine learning systems,” he says.

[Editor's note: This story has been updated to correct Dr. Lake's first name.]