Tech leaders launch nonprofit to save the world from killer robots

Francois Mori/AP
Tesla Motors Inc. CEO Elon Musk speaks at the Paris Pantheon Sorbonne University as part of COP21, the United Nations Climate Change Conference, in Paris, Dec. 2. Mr. Musk is one of a slew of tech titans behind the new nonprofit OpenAI, a project that aims to ensure artificial intelligence has a positive impact on human society.

Some of the top minds in tech today have banded together to prevent artificial intelligence (AI) from becoming a scourge for humanity – and instead optimize its potential for good.

With an initial investment of $1 billion from leading names in technology, the new nonprofit OpenAI launched Saturday with the ambition of ensuring that AI has a positive impact on society.

“Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole,” the group wrote in a blog post introducing the new venture. “Because of AI's surprising history, it's hard to predict when human-level AI might come within reach. When it does, it'll be important to have a leading research institution which can prioritize a good outcome for all over its own self-interest.”

Proponents of AI technology, such as the Pentagon, say that robotic weapons with human-level intelligence have the potential to “increase the precision of drones, keep more troops out of harm’s way ... and reduce emotional and irrational human decisionmaking on the battlefield,” The Monitor’s Pete Spotts reported over the summer.

But tech leaders have for some time expressed concern over the dangers of letting loose such technology without oversight.

In an open letter published in July, more than 1,000 AI and robotics researchers called for a ban on offensive autonomous weapons, highlighting the dangers of AI in combat and warning of the violent arms race they argued would inevitably result:

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.... Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

“We therefore believe that a military AI arms race would not be beneficial for humanity,” the letter went on.

OpenAI intends to combat that dystopian future – or others like it – by making new research publicly available and encouraging collaboration across institutions and companies. As a nonprofit, the group hopes to be able to prioritize philanthropy over self-interest.

“Essentially, OpenAI is a research lab meant to counteract large corporations who may gain too much power by owning super-intelligence systems devoted to profits, as well as governments which may use AI to gain power and even oppress their citizenry,” writes author and longtime tech writer Steven Levy for Medium.

Ilya Sutskever, a former research scientist with the Google Brain team and one of the world’s top experts on machine learning, is OpenAI’s research director, while Greg Brockman, formerly of the online payment company Stripe, is chief technology officer. Tesla’s Elon Musk – a vocal critic of the dangers of AI – and Y Combinator’s Sam Altman are the group’s co-chairs.

“If you think about how you use, say, applications on the Internet, you’ve got your email and you’ve got the social media and with apps on your phone – they effectively make you superhuman and you don’t think of them as being other, you think of them as being an extension of yourself. So to the degree that we can guide AI in that direction, we want to do that,” Mr. Musk told Medium.
