OpenAI: Effort to democratize artificial intelligence research?

Tesla chief Elon Musk and other Silicon Valley executives announced a $1 billion investment in OpenAI, a non-profit company intended as a research lab, not a commercial venture.

Computer servers at a Google data center in Mayes County, Okla., in an undated photo. Tesla CEO Elon Musk and other Silicon Valley executives announced a $1 billion investment on Friday in a research lab called OpenAI that aims to make artificial intelligence research available to the public.

Connie Zhou/Google via AP/File

December 14, 2015

Debates over the future of artificial intelligence often boil down to whether the technology will help humans (detecting patterns that could help solve crimes or driving autonomous cars, for example) or become the stuff of the dystopian nightmares that have long fueled science fiction.

With a $1 billion investment in a non-profit called OpenAI, Tesla head Elon Musk and several other prominent tech executives are aiming for the former, while taking a swipe at the latter.

The new company will make its patents and research open to the public in a bid to increase transparency about AI’s potential rather than focus on its commercial applications, say its backers, who include LinkedIn co-founder Reid Hoffman, venture capitalist Peter Thiel, the start-up incubator Y Combinator, and Amazon Web Services.

“Since our research is free from financial obligations, we can better focus on a positive human impact,” says the group in a statement. “We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible... It’s hard to fathom how much human-level AI could benefit society, and it’s equally hard to imagine how much it could damage society if built or used incorrectly.”

Mr. Musk has long been a critic of unchecked artificial intelligence development, describing it as “our biggest existential threat” in a 2014 question-and-answer session with Massachusetts Institute of Technology students and suggesting that it should possibly be regulated at the national level.

Often compared to the comic-book billionaire Tony Stark for his investments in a range of “moonshot” technologies, from the high-speed transit concept Hyperloop to private spaceflight with SpaceX, Mr. Musk has said he is involved in artificial intelligence as a watchdog rather than as a traditional commercial investor.

Mr. Musk was an early investor in the artificial intelligence company DeepMind, which Google acquired in 2014, and joined Mark Zuckerberg and other prominent tech figures in investing in the machine learning firm Vicarious that same year.

“My sort of ‘investment,’ in quotes, for DeepMind was just to get a better understanding of AI and to keep an eye on it, if you will,” he told the technology journalist Steven Levy in a Medium interview on Friday.

In July, he joined a group of researchers focused on artificial intelligence, including Ilya Sutskever, now OpenAI’s research director, and the linguist Noam Chomsky, in arguing against the use of AI for advanced weaponry and military capabilities, comparing its potential impact to that of nuclear weapons.

“If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow,” the researchers wrote.

The introduction of OpenAI comes as tech giants such as Facebook and Google release their own artificial intelligence software to the public. The companies say they want to open up their proprietary technology to raise awareness of what’s often called deep learning: techniques that can translate languages, recognize and mimic human speech, and identify images, for example.

But Mr. Musk and Y Combinator CEO Sam Altman, who together are co-chairing the new venture, argue that as AI advances, commercial companies will become less willing to share their innovations in order to protect their commercial value. OpenAI, by contrast, is intended as a research lab, not a profit-making enterprise.

“Security through secrecy on technology has just not worked very often,” Mr. Altman told Medium’s Mr. Levy. “If only one person gets to have it, how do you decide if that should be Google or the U.S. government or the Chinese government or ISIS or who?”

OpenAI is still in its “embryonic stages,” they say, and it could take several decades before the lab develops truly intelligent technology.

Large investments in artificial intelligence have not been immune to criticism, however. In January, Roman Ormandy, a technologist and entrepreneur who has worked on a wearable personal assistant, argued against prioritizing AI over other avenues of research. A heavy focus on AI, along with media coverage predicting the arrival of true artificial intelligence within a decade or two, would draw attention away from advances in neuroscience, the study of the brain, Mr. Ormandy argued in a Wired column.

“I believe that neural science and biology utilizing wearable sensors is already much more fruitful than AI in delivering personal assistants guiding us through daily life, keeping us healthier and stress free, based on better understanding of [the] brain, rather than logic of CPU programming and algorithms of AI focused on weapons and robotics,” he wrote.

Noting that Silicon Valley giants have hired several deep learning pioneers, including University of Toronto deep learning specialist Geoffrey Hinton, who joined Google; New York University researcher Yann LeCun, who now works for Facebook; and Andrew Ng of Stanford University, who partnered with the Chinese search engine giant Baidu, Mr. Ormandy says an ongoing focus on AI could stifle other forms of research.

More than 50 years after President Dwight Eisenhower famously warned of the dangers of “the acquisition of unwarranted influence” by the military-industrial complex, researchers working in machine learning have been engaged in similar debates about the commercial potential of their work.

“We’re training people every day that have these skills, but somehow the connection isn’t there,” said Cynthia Rudin, a machine learning researcher at MIT who has worked on software that helps police departments solve crimes by identifying patterns in burglaries. In a November interview, she noted the rise of software from commercial companies such as Microsoft and PredPol that claims to offer departments predictive policing capabilities.

In the interview with Mr. Levy, Mr. Musk described OpenAI as an effort to address those concerns.

“I think the best defense against the misuse of AI is to empower as many people as possible to have AI. If everyone has AI powers, then there’s not any one person or a small set of individuals who can have AI superpower,” he says.