War robots will lessen killing – not increase it

Stemming from fear that autonomous robots could embark on a campaign of indiscriminate killing, some have called for a global moratorium on 'lethal autonomous robotics.' In fact, there is a convincing base of evidence that robots are more likely to prevent slaughter than engage in it.

Bipedal humanoid robot 'Atlas,' primarily developed by the American robotics company Boston Dynamics, appears at a news conference at the University of Hong Kong Oct. 17. (Photo: Tyrone Siu/Reuters)

October 18, 2013

Could armed autonomous robots embark on a campaign of indiscriminate killing? Fears such as this are behind the growing consternation surrounding the use of robotics in warfare.

In May, Christof Heyns, the UN special rapporteur on extrajudicial, summary, or arbitrary executions, implied that "autonomous" lethal robotics could result in “mechanical slaughter.” He called for a global moratorium to be put on the testing, production, and use of these technologies.

It would be difficult to find anyone arguing against the need for reflection in the decision to wage war, regardless of which tools humans use to fight their battles. However, there is a convincing base of evidence that robots are more likely to prevent slaughter than engage in it.

The notion that warfare should be up close and personal – mano a mano, so to speak – disappeared long ago, clearly well before the atomic bomb, and one might even argue before the development of gunpowder. Over time, the variety and availability of weapons have increased, making it possible to conduct war at more impersonal and less discriminate levels.

To some extent, robotics can help to reverse this trend. Specifically, recent research and analysis on ground robotics (some of it conducted at the RAND Corporation, where I am an engineer) has shown that advanced robotics technology has tremendous potential to save lives. And it can save not just the lives of soldiers, airmen, and marines but also the lives of noncombatants who would otherwise fall victim to collateral damage.

How does this work? Fundamentally, robotic platforms can provide an alternative to the large-scale weapons that might otherwise be used. From a tactical perspective, they can provide a physical buffer between friendly and enemy forces. From an analytical perspective, they buy time, a critical component in decisions to use larger-scale lethal force. With more time, greater care can be taken with the decision to use force, and using nonlethal force becomes a more realistic option.

The argument against lethal or "fighting" robots in particular is fundamentally one of semantics. Converting a nonlethal robotic system into a lethal one may be as simple as changing the ammunition or adding an explosive device to an otherwise unarmed system. Just such a conversion was made relatively quickly in Afghanistan and Iraq, where unarmed surveillance platforms were turned into robots used to destroy improvised explosive devices (IEDs), likely saving the lives of many US and coalition forces.

One could argue (correctly) that these systems were not autonomous. But should they be banned if they were? What if future systems could be used to efficiently sweep, neutralize, and ultimately rid the world of IEDs? Such a capability might offer a working alternative to the well-intentioned but marginally effective ban on mines.

The concept of an "autonomous" robot is also a bit of a misnomer. Autonomy has many dimensions in the context of a robotic operation, including the decision to use the robot in the first place. A robot isn't fully autonomous if a human makes that decision, and that human can decide when to turn the machine off.

Regardless, robotics represents an entire research field – not just a specific system. Nor is robotics solely a military capability; there is a vibrant and growing commercial base as well. Thus, more testing (and greater familiarity) may make more sense in addressing questions and concerns about robotics than a moratorium would.

The case for a sweeping ban on these autonomous lethal systems is more a reflection of fears about how they could be used than of how they actually are used. Recent history has shown that current autonomous lethal robotic systems, many of which are already fielded, have been used with great care.

Given this and the pervasiveness of the technology, discussion and research should focus on responsible leadership and on developing sensible and enforceable policies for future systems, not on enacting a moratorium on technology that can ultimately save lives. War – whether waged with the assistance of robots or not – is always undesirable and should never be a first resort.

John Matsumura is a senior engineer at the nonprofit, nonpartisan RAND Corporation.