The ethics of killer robots

The Pentagon and militaries in other countries are developing robotic weapons that can choose when to attack without human oversight. But a push to regulate or ban the development of such weapons is mounting.

The Navy's unmanned X-47B aircraft receives fuel from an Omega K-707 tanker plane (not shown) while operating in the Atlantic Test Ranges over the Chesapeake Bay, Maryland, in April.

Wolter/US Navy/Reuters

June 17, 2015

Within a few decades, perhaps sooner, robotic weapons will likely be able to select and attack targets – including humans – with no human controller needed.

To the Pentagon, which is actively conducting research in this area, the technology promises to increase the precision of drones and help keep more troops out of harm’s way. Other proponents add that it could reduce emotional and irrational human decisionmaking on the battlefield, which can lead to atrocities large and small. Several countries are pursuing so-called lethal autonomous weapons systems.

But critics worry that taking humans out of the loop for life-or-death combat decisions is unethical and would lead to violations of human-rights laws as well as international laws governing combat. They are pushing for an outright ban on the development, deployment, and use of such weapons.


The critics include human-rights groups, the International Red Cross, more than 20 winners of the Nobel Peace Prize, and scientists involved in robotics and artificial intelligence work. To them, the human element – while at times flawed – remains essential in the oversight of lethal machines.

“We’re not anti-robotics and not even anti-autonomy,” says Steven Goose, who oversees arms-control activities at Human Rights Watch, which is coordinating the efforts of roughly 50 non-government organizations working for the ban. “We just say that you have to draw a line when you no longer have meaningful human control over the key combat decisions of targeting and attacking.... Once you reach a stage where a platform is weaponized, you have to maintain meaningful human control.”

Such a ban, however, would create a different kind of danger, say supporters of the research: it would cut off the possibility that the technology could save lives. Autonomous weapons may lack empathy, but they are not motivated by self-preservation, anger, revenge, hunger, fatigue, or resentment. With more-extensive and more-capable sensors, they could make faster decisions and respond to changing conditions more quickly than humans could.

It makes sense to have a moratorium on deploying such weapons “until we can show that we have exceeded human-level performance from an ethical perspective,” says Ronald Arkin, an associate dean at the Georgia Institute of Technology in Atlanta whose research focuses on robotics and interactive computing.

But a preemptive ban risks “throwing out a potentially helpful solution to this chronic problem of noncombatant casualties. If we can't find ways to reduce noncombatant casualties through the use of technology, I would be rather surprised,” he says.


Wolf packs of autonomous drones

Weapons have been on a track toward becoming more autonomous for some time, notes Thomas Karako, a senior fellow with the international security program at the Center for Strategic and International Studies in Washington.

For instance, anti-ship missiles need to take swift, intense evasive maneuvers to outwit missile-defense systems, he notes. These are capabilities “you want to program in advance, rather than using a joystick,” he says.

The United States Navy, meanwhile, has been working on the X-47B – a stealthy, jet-fighter-like autonomous drone that in late April demonstrated the first fully automated rendezvous with an aerial tanker. In 2013, the X-47B became the first unmanned aircraft to launch from and land on an aircraft carrier autonomously.

By some accounts, fully autonomous weapons systems are still decades away.

Two research projects by the US Defense Advanced Research Projects Agency (DARPA) offer a peek into that future.

One, dubbed CODE, aims to develop software that would enable drones to work together, protect one another, and direct other, less capable systems.

“Just as wolves hunt in coordinated packs with minimal communication, multiple CODE-enabled unmanned aircraft would collaborate to find, track, identify and engage targets, all under the command of a single human mission supervisor,” said Jean-Charles Ledé, DARPA program manager, in a statement announcing the program.
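What that division of labor might look like is easier to see in miniature. The sketch below is a purely illustrative Python toy, not DARPA’s CODE software: the drone names, the Track record, and the approval prompt are all invented, but they show the basic pattern of machines that find and track targets on their own while a single human keeps the final say over any engagement.

```python
# Illustrative toy only: several collaborating drones find and track
# targets on their own, but every proposed engagement is routed through
# one human mission supervisor. All names here are invented.
from dataclasses import dataclass

@dataclass
class Track:
    target_id: str
    located_by: str    # which drone in the pack found the target
    confidence: float  # how confident the shared sensors are (0.0 to 1.0)

class MissionSupervisor:
    """The single human decision point for the whole pack."""
    def approve(self, track: Track) -> bool:
        answer = input(f"Engage {track.target_id} "
                       f"(found by {track.located_by}, "
                       f"confidence {track.confidence:.0%})? [y/N] ")
        return answer.strip().lower() == "y"

def run_pack(shared_tracks: list[Track], supervisor: MissionSupervisor) -> None:
    # The drones collaborate on finding and tracking; nothing is engaged
    # unless the human supervisor explicitly says yes.
    for track in shared_tracks:
        if supervisor.approve(track):
            print(f"Engagement of {track.target_id} authorized by supervisor.")
        else:
            print(f"{track.target_id}: continue tracking only.")

if __name__ == "__main__":
    run_pack([Track("vehicle-7", "drone-A", 0.92),
              Track("vehicle-9", "drone-C", 0.64)],
             MissionSupervisor())
```

Everything hard about the real program lies in what this toy leaves out, from coordinating with minimal communication to keeping a single supervisor’s workload manageable as the pack grows.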

Another project, called FLA, is focused on developing navigation software geared to small drones operating in unfamiliar urban areas or over uneven, complex terrain. The goal is to allow them to flit along at up to 45 miles an hour, finding their way without help from outside navigation systems and abruptly making any needed course changes, mimicking the flight of birds or insects.

Some advocates for an international ban suggest the technology may already be at hand.

Even ban proponents “say, ‘Well, these weapons won't really exist for 20 or 30 years,’ ” says Stuart Russell, a computer scientist at the University of California at Berkeley. “Then you have the UK Ministry of Defence saying that this is entirely feasible right now. I think that's much closer to the truth.”

For its part, the United States has an internal directive circumscribing the use of autonomous and semi-autonomous weapons. Among the provisions: humans must have the final say on the use of force; autonomous weapons must not target humans; and they can attack only targets that humans have specified. However, the directive has a “pull date” of Nov. 21, 2022, unless a new administration modifies or renews it.
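Read as requirements, those provisions amount to a short checklist that has to pass before any engagement. The Python sketch below is a hypothetical illustration of that checklist, not the Pentagon’s actual software; the Engagement record and the clear_to_engage function are invented for the example.

```python
# Hypothetical encoding of the provisions described above: a human must
# have the final say on the use of force, humans may not be targeted, and
# only targets a human has specified may be attacked. Illustrative only.
from dataclasses import dataclass

@dataclass
class Engagement:
    target_id: str
    target_is_human: bool          # would the attack target a person?
    human_specified_target: bool   # did a person designate this target?
    human_approved_release: bool   # did a person approve the attack?

def clear_to_engage(e: Engagement) -> bool:
    if e.target_is_human:
        return False   # autonomous weapons must not target humans
    if not e.human_specified_target:
        return False   # attack only targets that humans have specified
    if not e.human_approved_release:
        return False   # humans have the final say on the use of force
    return True

# A human-designated, human-approved, non-human target passes the check.
print(clear_to_engage(Engagement("radar-site-3", False, True, True)))   # True
print(clear_to_engage(Engagement("radar-site-3", False, False, True)))  # False
```

The hard questions, of course, sit behind those yes-or-no fields, such as what still counts as a human-specified target once a mission is underway.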

“The United States has actually been very proactive in terms of trying to manage this on their own,” Georgia Tech's Dr. Arkin says.

Humanity in drone warfare?

A willingness to see if robots can outperform humans from an ethical or human-rights standpoint stems in no small part from back-to-back reports from the United States Army Medical Command’s Office of the Surgeon General. The 2007 and 2008 reports deal with the mental health and well-being of soldiers and marines who served in Iraq.

The reports don't document atrocities or excessive collateral damage directly, but they hint at the potential for some troops to mistreat noncombatants.

While a clear majority of soldiers and marines surveyed appeared to operate within ethical standards, roughly one-third acknowledged insulting or cursing at noncombatants in their presence at least once. In the 2007 report, 10.9 percent acknowledged damaging or destroying private property unnecessarily – rising to 13.6 percent in the 2008 update. The share who reported hitting or kicking a noncombatant unnecessarily rose from 5.3 percent to 6.1 percent.

In 2012, Arkin published a proposed software architecture for introducing ethics into autonomous weapons systems. The hope was that other researchers would want to collaborate in developing the algorithms needed to keep war-fighting bots within the bounds of the laws of war and international humanitarian law. A handful of groups have joined the effort since then, he says, including research teams in New Zealand, Britain, France, and at the US Air Force Research Laboratory and the US Naval Postgraduate School.
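The central idea of such an architecture, a software layer that vets each proposed lethal action against encoded rules before it can be carried out, can be sketched very loosely in code. The Python fragment below is an invented illustration rather than Arkin’s published design; the rule names, fields, and thresholds are assumptions made only for the example.

```python
# Loose, invented sketch of an "ethics layer" that vets each proposed
# lethal action against encoded rules before it can be carried out.
# None of this is Arkin's published architecture.
from dataclasses import dataclass

@dataclass
class ProposedAction:
    target_is_combatant: bool      # distinction: is this a lawful target?
    expected_civilian_harm: float  # proportionality input (0.0 to 1.0)
    expected_military_gain: float  # proportionality input (0.0 to 1.0)
    attack_is_necessary: bool      # necessity: is no lesser means available?

def ethics_check(action: ProposedAction) -> tuple[bool, str]:
    """Return (permitted, reason); any failed rule vetoes the action."""
    if not action.target_is_combatant:
        return False, "distinction: target not identified as a combatant"
    if not action.attack_is_necessary:
        return False, "necessity: a lesser means may suffice"
    if action.expected_civilian_harm > action.expected_military_gain:
        return False, "proportionality: expected harm outweighs expected gain"
    return True, "all encoded constraints satisfied"

permitted, reason = ethics_check(ProposedAction(
    target_is_combatant=True,
    expected_civilian_harm=0.7,
    expected_military_gain=0.3,
    attack_is_necessary=True))
print(permitted, "-", reason)  # False - proportionality check fails
```

The very crudeness of such a sketch, with proportionality reduced to comparing two estimated numbers, is part of what skeptics point to when they argue these judgments cannot be handed to software.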

“It sounds counterintuitive, but technology clearly can do better than human beings in many cases,” says Arkin. “If we are willing to turn over some of our decisionmaking to these machines, as we have been in the past, we may actually get better outcomes.”

Proponents of a ban have their doubts. Many of the rules involved in military tactics have subjective, intuitive elements that would be tough for lethal autonomous weapons to cope with, they argue. Among them: proportionality (weighing harm to civilians against military gains), necessity, and the ability to distinguish friend or noncombatant from foe.

Threat of a new arms race

Such issues are important, notes Dr. Russell of Berkeley, but they miss a larger point: Lethal autonomous weapons could present a fresh set of weapons-proliferation issues.

“What happens if these weapons are developed further and we have an arms race? What does the end point of that arms race look like?” he asks.

When Russell first started to think about lethal autonomous weapons, “my gut reaction was that perhaps AI [artificial intelligence] could do a better job. Maybe we could produce weapons that are much more targeted and much less likely to destroy civilian life and property.”

But the prospect of an arms race changed his mind, he says, as did the prospect that repressive governments could turn lethal autonomous weapons inward.

“It’s one thing to think about the UN or US using them against Boko Haram or ISIS, people that everyone agrees are bad,” he says. “It’s another thing to think about [Syrian President Bashar al-] Assad using them to put down another rebellion in one of his cities. Any repressive government would love to have these kinds of tools.”

It's unclear that a ban would prevent that. History is replete with examples of countries willing to avoid or violate such pacts when they believe doing so is in their national interest.

An outright ban also may not be feasible because the technologies have nonmilitary applications as well as military ones, suggests Markus Wagner, an associate professor at the University of Miami Law School in Coral Gables, Fla.

Yet there is an urgent need to set standards for the deployment of such lethal autonomous weapons, he wrote in an article published last December in the Vanderbilt Journal of Transnational Law.

Arkin and others suggest that concerns about lethal autonomous weapons could be met through existing legal mechanisms and so don't require a ban.

Last year, the 121 countries that are party to the Convention on Certain Conventional Weapons began discussing the weapons systems. A session in November will set the convention's agenda for 2016.

One challenge has been agreeing on definitions of “lethal autonomous weapon system” and “meaningful human control.”

But based on discussions during a five-day meeting in Geneva in April, it's clear that the convention will set lethal autonomous weapons as its main work item for 2016, Mr. Goose of Human Rights Watch says.