Tech leaders launch nonprofit to save the world from killer robots
Elon Musk, Sam Altman, and other tech titans have invested $1 billion in a nonprofit that would help direct artificial intelligence technology toward positive human impact.
Some of the top minds in tech today have banded together to prevent artificial intelligence (AI) from becoming a scourge for humanity – and instead optimize its potential for good.
With an initial investment of $1 billion from leading names in technology, the new nonprofit OpenAI launched Saturday with the ambition of ensuring that AI has a positive impact on society.
“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole,” the group wrote in a blog post introducing the new venture. “Because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.”
Proponents of AI technology, such as the Pentagon, say that robotic weapons with human-level intelligence have the potential to “increase the precision of drones, keep more troops out of harm’s way ... and reduce emotional and irrational human decisionmaking on the battlefield,” The Monitor’s Pete Spotts reported over the summer.
But tech leaders have for some time expressed concern over the dangers of letting loose such technology without oversight.
In an open letter published in July, more than 1,000 AI and robotics researchers called for a ban on offensive autonomous weapons, seeking to highlight the dangers of AI in combat and to head off the violent arms race they said would inevitably result:
If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.... Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.
“We therefore believe that a military AI arms race would not be beneficial for humanity,” the letter went on.
OpenAI intends to combat that dystopian future – or others like it – by making new research publicly available and encouraging collaboration across institutions and companies. As a nonprofit, the group hopes to be able to prioritize philanthropy over self-interest.
“Essentially, OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits, as well as governments which may use AI to gain power and even oppress their citizenry,” writes author and longtime tech writer Steven Levy for Medium.
Ilya Sutskever, a former research scientist on the Google Brain team and one of the world’s top experts on machine learning, is OpenAI’s research director, while Greg Brockman, formerly of the online payment company Stripe, is chief technology officer. Tesla’s Elon Musk – a vocal critic of the dangers of AI – and Y Combinator’s Sam Altman are the group’s co-chairs.
“If you think about how you use, say, applications on the Internet, you’ve got your email and you’ve got the social media and with apps on your phone – they effectively make you superhuman and you don’t think of them as being other, you think of them as being an extension of yourself. So to the degree that we can guide AI in that direction, we want to do that,” Mr. Musk told Medium.