Musk, Hawking, Chomsky: Why they want a ban on killer robots.

Leading researchers in robotics and artificial intelligence signed an open letter, published Monday, calling for a preemptive ban on autonomous offensive weapons.

The Navy's unmanned X-47B aircraft receives fuel from an Omega K-707 tanker plane (not shown) while operating in the Atlantic Test Ranges over the Chesapeake Bay, Maryland, in April. (Wolter/US Navy/Reuters)

July 27, 2015

A global arms race for killer robots? Bad idea.

That’s according to more than 1,000 leading artificial intelligence (AI) and robotics researchers, who signed an open letter published Monday by the nonprofit Future of Life Institute.

The letter calls for a ban on autonomous offensive weapons as a means of heading off such an arms race, and it is the latest word in the global conversation about the risks and benefits of AI weaponry.

The ethics of killer robots

Proponents of robotic weapons, such as the Pentagon, say that such technology could increase drone precision, keep troops out of harm’s way, and reduce emotional and irrational decisionmaking on the battlefield, The Christian Science Monitor’s Pete Spotts reported last month.

Critics, however, warn that taking humans out of the equation could lead to human rights violations as well as trouble around international laws governing combat, Mr. Spotts wrote.

The new letter comes down on the side of the critics:

If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce.... Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group.

“We therefore believe that a military AI arms race would not be beneficial for humanity,” the letter goes on to say.

Among the signatories are renowned physicist Stephen Hawking, Tesla Motors Chief Executive Officer Elon Musk, cognitive scientist Noam Chomsky, and Apple co-founder Steve Wozniak, as well as top AI and robotics experts from the Massachusetts Institute of Technology, Harvard University, Microsoft, and Google.

Dr. Hawking in particular summoned images of the Terminator wreaking havoc on humans when he told the BBC in a 2014 interview, “The development of full artificial intelligence could spell the end of the human race. It would take off on its own, and re-design itself at an ever increasing rate. Humans, who are limited by slow biological evolution, couldn’t compete, and would be superseded.”

Others are less dire in their pronouncements.

“We’re not anti-robotics and not even anti-autonomy,” Stephen Goose, one of the signatories and director of arms-control activities at Human Rights Watch, told the Monitor. “We just say that you have to draw a line when you no longer have meaningful human control over the key combat decisions of targeting and attacking.”

The sticking point is what “meaningful human control” actually means – an idea that is “intuitively appealing even if the concept is not precisely defined,” according to the United Nations Institute for Disarmament Research.

To complicate matters further, others point out that a preemptive ban, such as the one the open letter advocates, could close the door on AI technology that could save lives.

“It sounds counterintuitive, but technology clearly can do better than human beings in many cases,” Ronald Arkin, an associate dean at the Georgia Institute of Technology in Atlanta whose research focuses on robotics and interactive computing, told the Monitor. “If we are willing to turn over some of our decisionmaking to these machines, as we have been in the past, we may actually get better outcomes.”

One thing most experts do agree on is that further debate is critical to determining the future of AI in warfare.

“Further discussion and dialogue is needed on autonomy and human control in weapon systems to better understand these issues and what principles should guide the development of future weapon systems that might incorporate increased autonomy,” wrote Michael Horowitz and Paul Scharre, both from the Center for a New American Security.