Drone strikes that hit civilians: Time to rethink intelligence
President Obama’s heavy reliance on drone-launched missiles to strike terrorists, mainly in Pakistan and Yemen, has come at a regrettable price. On Monday, Amnesty International and Human Rights Watch claimed that more than 87 civilians were killed in just the 10 drone attacks they were able to investigate. Last week, a United Nations investigator estimated that at least 400 civilians have been killed by American drone strikes over the past decade.
These reports cry out for more transparency about how the CIA and the military’s Joint Special Operations Command conduct the attacks, with any number of questions in need of answers.
How do drone operators, for example, know they can avoid civilian casualties with the “near certainty” required by Mr. Obama? How do they know if a suspected militant on the ground is a “continuing, imminent threat” to the United States? And is the unintended killing of bystanders creating more new militants than the attacks are eliminating?
Even with its success in preventing terrorist attacks on Americans, the US drone program remains shrouded in secrecy, especially in light of these new reports of civilian casualties.
Perhaps the most difficult question is this: As drone technology advances, is the US turning more of the decisionmaking in a strike over to the so-called intelligence of a semi-autonomous machine?
Many attacks on suspected Al Qaeda or Taliban figures are “signature strikes,” which rely on electronic surveillance that detects patterns of behavior rather than real-time human judgment. Are drones now also able to fire a missile on their own?
Militaries throughout history have sought to save soldiers’ lives and reduce costs with smarter killing machines. Last July, for example, the US Navy was able to land the first drone – the X-47B Salty Dog 502 – on an aircraft carrier without a human operator.
Many of these advances are designed simply to make the drone capable of striking a fast-moving target, not a stationary human. Yet the killing machines do err. In 1988, the Aegis air defense system aboard the USS Vincennes misidentified a civilian Iranian airliner as a fighter jet; the crew, trusting the system, shot the plane down, killing all 290 aboard.
The notion of robotic “brains” acting independently and in dangerous ways is no longer the stuff of science fiction. Advances in drone warfare demand a better understanding of what intelligence is and whether it can be delegated to machines, or even ascribed solely to the human brain.
The various unmanned military aircraft – Reapers, Predators, and Global Hawks – are achieving higher levels of autonomy, and yet the military is still searching for a definition of the intelligence put into them.
The most common formulation is the OODA Loop: the ability of a human or machine to observe, orient, decide, and act, keeping itself “in the loop” of the information needed to make choices. But even this framework is limited.
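As a rough illustration only, and not a description of any real weapons system, the loop can be sketched in a few lines of code; every name, function, and threshold below is hypothetical:

```python
# A minimal, hypothetical sketch of the OODA loop as a control cycle.
# Nothing here models any actual system; all values are illustrative.

import random

def observe():
    """Gather raw sensor readings (here, a simulated confidence score)."""
    return {"target_confidence": random.random()}

def orient(observation):
    """Place the observation in context -- the step where judgment lives."""
    return {"is_threat": observation["target_confidence"] > 0.95}

def decide(assessment):
    """Choose an action. A human 'in the loop' must still confirm."""
    if assessment["is_threat"]:
        return "request_human_authorization"
    return "continue_monitoring"

def act(decision):
    """Carry out the chosen action."""
    print(f"Action taken: {decision}")

for _ in range(3):  # each pass is one trip around the loop
    act(decide(orient(observe())))
```

Even in this toy version, everything turns on the threshold buried in the “orient” step, and on whether the human-authorization step is a genuine check or a formality.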
The military is not alone in wondering about the nature and source of intelligence, and whether it can be delegated to objects. The same unease is felt about driverless cars and their ability to handle every traffic situation.
The problem lies not in software but in the theories about intelligence that humans bring to machines – and themselves. We now trust the “intelligence” of an elevator to lift us up and down buildings. But should we also trust a drone and its missile to know exactly when and where to kill?
Drone technology has long been remotely controlled, or automated, rather than strictly autonomous. Yet the lines are quickly becoming blurred. To prevent civilian casualties, the military must not only keep humans “in the loop” of decisionmaking but also be more aware of how operators know they have located a threat, and one that is not near civilians.
Such knowledge is not a matter of data alone. It requires ethical thinking, adaptability, and critical reasoning, as well as empathy for those who might become collateral damage in a strike. These traits of intelligence do not lie in the increasingly complex algorithms of computers.