Why ‘killer robots’ are becoming a real threat – and an ethics test

Dozens of CEOs of firms working in artificial intelligence have issued a joint warning that autonomous weapons risk making warfare cheaper, and faster, than ever before.

The autonomous 'Sea Hunter,' developed by the Pentagon’s Defense Advanced Research Projects Agency, is shown docked in Portland, Ore., after its christening ceremony in April 2016. Currently oriented toward detecting mines and submarines, the drone is expected to be outfitted with weapons at some point.

Steve Dipaola/Reuters/File

August 31, 2017

Nations are busy putting guns into the hands of robots.

Generals find that attractive for many reasons. Smart machines can take on the dull and dangerous work that soldiers now do, such as surveillance and mine removal, without getting bored or tired. In combat, they can reduce the costs of war, not only in dollars but also in human casualties.

But many governments and artificial intelligence (AI) researchers are worried. The threat at present is not that robots are so smart that they take over the world Hollywood-style. It’s that today’s robots won’t be smart enough to handle the new weapons and responsibilities they’re being given. And because of the rapid advances in AI, experts worry that the technology will soon cross a line where machines, rather than humans, decide when to take a human life.

This was supposed to be the year when governments began to address such concerns. After three years of discussing putting limits on military robots, some 90 countries were expected in August to formalize the debate under the aegis of the United Nations. And in the United States, the Trump administration was due to update an expiring Obama-era directive on autonomous weapons.

Instead, the UN canceled the inaugural meeting set for this month because Brazil and a few other smaller countries had not paid their contributions to the UN Convention on Conventional Weapons. The shortfall also imperils a meeting scheduled for November. Meanwhile, because of an administrative change, the Pentagon has eliminated its deadline, leaving the current directive in place despite criticism that its language is too ambiguous.

The private sector has stepped into this vacuum, warning in an open letter to the UN on Aug. 21 that “lethal autonomous weapons threaten to become the third revolution in warfare [following firearms and nuclear weapons]. Once developed, they will permit armed conflict to be fought at a scale greater than ever, and at timescales faster than humans can comprehend.”

The letter, signed by 126 founders of robotics and artificial intelligence companies from 28 countries, asks the new UN group on autonomous weapons to “find a way to protect us all from these dangers.”

What’s under way

Killer robots – as opponents of the technology like to call them – are already being tested and deployed. For example:

•On the southern edge of the Korean Demilitarized Zone, South Korea has deployed the Super aEgis II, a sentry gun that can detect, target, and destroy enemy threats. It was designed to operate on its own, although the guns reportedly still cannot fire without human intervention.

•Britain’s Taranis, an experimental prototype for future stealth drones, has an autonomous mode where it flies and carries out missions on its own, including searching for targets.

•This summer, the US Office of Naval Research has been testing the Sea Hunter, the Navy’s next-generation submarine-hunting drone ship, which can operate autonomously or by remote control. Currently oriented toward detecting mines and ultraquiet diesel-electric submarines, the drone is expected to be outfitted with weapons at some point.

“Research of autonomous systems is continuing to evolve and expand,” Roger Cabiness, a Defense Department spokesman, writes in an email. The department “is committed to complying with existing law of war requirements. [And] the use of autonomy in weapon systems can enhance the way law of war principles are implemented in military operations.  For example, commanders can use precision-guided weapon systems with homing functions to reduce the risk of civilian casualties.”

But as the technology evolves, so do the ethical questions. When the Air Force uses remotely piloted drones to target people, such as terrorists, it specifically ensures that military personnel decide whether to fire. By contrast, the US Navy has since the 1980s used the Phalanx Close-In Weapon System, which tracks, targets, and shoots down incoming antiship missiles without human intervention. The missiles simply move too fast for human operators to respond in time.

In 2012, the Defense Department issued Directive 3000.09, which says “autonomous and semi-autonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force.” Of course, what the Pentagon considers “appropriate levels of human judgment” can vary from general to general.

“There will be a raucous debate in the department about whether or not we take humans out of the decision to take lethal action,” Gen. Paul Selva, vice chairman of the Joint Chiefs of Staff, told a Senate committee at his confirmation hearings in July. He said he didn’t think it reasonable for robots to decide whether to take a human life.

More ethical than humans?

Many ethicists and artificial intelligence developers want to ensure that people are kept in the loop whenever lethal force is applied. For now, that is largely a given, at least among nations adhering to the law of war, not least because robots still struggle to differentiate between soldiers and civilians in complex battle settings.

But the day may come, some say, when robots will be able to act more ethically than human troops, because their decisions would not be clouded by emotions such as vengefulness or self-preservation, which can distort human judgment.

“Unfortunately, humanity has a rather dismal record in ethical behavior in the battlefield,” Ronald Arkin, director of the Mobile Robot Laboratory at the Georgia Institute of Technology, wrote in a guest blog for the IEEE, a technical professional organization. “Such systems might be capable of reducing civilian casualties and property damage when compared to the performance of human warfighters.”

Robots may well be able to do a better job than humans someday, but the ethical challenge doesn’t go away, counters Toby Walsh, an artificial intelligence researcher at the University of New South Wales in Sydney who helped spearhead this month’s open letter to the UN. It simply changes the question.

‘They will lower the barriers to war.’

“When [robots] are much more capable, they will be weapons of mass destruction,” he says. “They will lower the barriers to war. You won’t need 1,000 people to wage war. You will just need one.”

That points to a key concern surrounding the new technology: by lowering the costs of war, robots may make it easier for nations to start one.

“Anything that makes the threshold for going to war lower than it was in cost and blood and treasure can act as an incentive to move from diplomacy to war,” says Col. James Cook, a military ethicist at the US Air Force Academy. “It’s human nature to try to get away with a cheap victory.” (He stresses that he’s not a spokesman for the military.)

Sometimes the advanced technology might start a conflict accidentally, warns Ryan Gariepy, chief technology officer of Clearpath Robotics in Kitchener, Ontario, which in 2014 became the first robotics company to publicly oppose killer robots. A robot sentry that malfunctions and opens fire, for example, could set off a “flash war.”

Such mistakes could create sticky legal entanglements, points out Arend Hintze, an artificial intelligence researcher at Michigan State University. If a military robot makes a mistake, who’s liable: the military, the hardware maker, the software designer?

The questions of accountability become trickier as the technology grows more complex. In January, three US fighter jets dropped 103 tiny aircraft, known as Perdix, over California to demonstrate the capabilities of microdrone “swarms.” Instead of directing each drone individually, the Perdix operator gives the group an order, such as observing a military facility, and the drones work out among themselves how to carry it out.

For now, Perdix is intended only for surveillance. But the technology hints at how such swarms could be used in combat down the road.