As AI joins battlefield, Pentagon seeks ethicist

October 28, 2019

When the chief of the Pentagon’s new Joint Artificial Intelligence Center briefed reporters recently, he made a point of emphasizing the eminently practical – even potentially boring – applications of machine learning to the business of war.

There’s the “predictive maintenance” that AI can bring to Black Hawk helicopters, for example, and “intelligent business automation” likely to lead to exciting boosts in “efficiencies for back office functions,” Lt. Gen. Jack Shanahan said. There are humanitarian pluses, too: AI will help the Defense Department better manage disaster relief.

But for 2020, the JAIC’s “biggest project,” General Shanahan announced, will be what the center has dubbed “AI for maneuver and fires.” In lulling U.S. military parlance, that includes targeting America’s enemies with “accelerated sensor-to-shooter timelines” and “autonomous and swarming systems” of drones – reminders that war does, after all, often involve killing people.

Why We Wrote This

Artificial intelligence is making inroads in the U.S. military, transforming everything from helicopter maintenance to logistics to recruiting. But what happens when AI gets involved in war’s grimmest task: taking lives?

When he was asked halfway through the press conference whether there should be “some sort of limitation” on the application of AI for military purposes, General Shanahan perhaps recognized that this was a fitting occasion to mention that the JAIC will also be hiring an AI ethicist to join its team. “We’re thinking deeply about the safe and lawful use of AI,” he said.

As artificial intelligence and machine learning permeate military affairs, these technologies are beginning to play a more direct role in taking lives. The Pentagon’s decision to hire an AI ethicist reflects an acknowledgment that bringing intelligent machines onto the battlefield will raise some very hard questions.

“In every single engagement that I personally participate in with the public,” said General Shanahan, “people want to talk about ethics – which is appropriate.” 

A shifting landscape

Hiring an ethicist was not his first impulse, General Shanahan acknowledged. “We wouldn’t have thought about this a year ago, I’ll be honest with you. But it’s at the forefront of my thinking now.” 

He wasn’t developing killer robots, after all. “There’s a tendency, a proclivity to jump to a killer robot discussion when you talk AI,” he said. But the landscape has changed. Back then, “these questions [of ethics] really did not rise to the surface every day, because it was really still humans looking at object detection, classification, and tracking. There were no weapons involved in that.”

Given the killing potentially involved in the “AI for maneuver and fires” project, however, “I have never spent the amount of time I’m spending now thinking about things like the ethical employment of artificial intelligence. We do take it very seriously,” he said. “It’s core to what we do in the DOD in any weapon system.”

Pentagon leaders repeatedly emphasize they are committed to keeping “humans in the loop” in any AI mission that involves shooting America’s enemies. Even so, AI technology “is different enough that people are nervous about how far it can go,” General Shanahan said. 

While the Pentagon is already bound by international laws of warfare, a JAIC ethicist will confront the thorny issues around “How do we use AI in a way that ensures we continue to act ethically?” says Paul Scharre, director of the technology and national security program at the Center for a New American Security.

It will be the job of the ethicist to ask the tough questions of a military figuring out, as General Shanahan puts it, “what it takes to weave AI into the very fabric of DOD.” 

Overseas competition 

Doing so will involve reconciling some seemingly disparate goals: While most U.S. officials agree that it is important to develop the military’s AI capabilities with an eye toward safeguarding human and civil rights, these same leaders also tend to be fiercely competitive when it comes to protecting U.S. national security from high-tech adversaries who may not abide by the same ethical standards.

General Shanahan alluded to this tension as a bit of a sore spot: “At its core, we are in a contest for the character of the international order in the digital age.” This character should reflect the values of “free and democratic” societies, he said. “I don’t see China or Russia placing the same kind of emphasis in these areas.”

This gives China “an advantage over the U.S. in speed of adoption [of AI technology],” General Shanahan argued, “because they don’t have the same restrictions – at least nothing that I’ve seen shows that they have those restrictions – that we put on every company, the DOD included, in terms of privacy and civil liberties. And what I don’t want to see is a future where our potential adversaries have a fully AI-enabled force – and we do not.”

Having an ethicist might help mediate some of these tensions, depending on how much power they have, says Patrick Lin, a philosophy professor specializing in AI and ethics at California Polytechnic State University in San Luis Obispo. “Say the DOD is super-interested in rolling out facial recognition or targeting ID, but the ethicist raises a red flag and says, ‘No way.’ What happens? Is this person a DOD insider or an outsider? Is this an employee who has to worry about keeping a job, or a contractor who would serve a two-year term then go back to a university?”  

In other words, “Will it be an advisory role, or will this person have a veto?” The latter seems unlikely, Professor Lin says. “It’s a lot of power for one person, and ignores the political realities. Even if the JAIC agrees with the AI ethicist that we shouldn’t roll out this [particular AI technology], we’re still governed by temporary political leaders who may have their own agenda. It could be that the president says, ‘Well, do it anyway.’”

An ethics of war

Ethicists will grapple with “Is it OK to create and deploy weapons that can be used in ethically acceptable ways by well-trained and lawyered-up U.S. forces, even if they are likely to be used unethically by many parties around the world?” says Stuart Russell, professor of computer science and a specialist in AI and its relation to humanity at the University of California, Berkeley. 

To date, and “to its credit, DOD has imposed very strong internal constraints against the principal ethical pitfalls it faces: developing and deploying lethal autonomous weapons,” Professor Russell adds. Indeed, Pentagon officials argue that, beyond the fact that the department does not plan to develop “killer robots” that act without human input, AI can decrease the chances of civilian casualties by making the killing of dangerous enemies more precise.

Yet even that accuracy, which some could argue is an unmitigated good in warfare, can raise troubling ethical questions of its own, Professor Lin says. “You could argue that it’s not clear how a robot would be different from, say, a really accurate gun,” and that a 90% lethality rate is a “big improvement” on human sharpshooters.

The U.S. military demonstrated a similar precision of fire during the first Gulf War, on what became known as the “highway of death,” which ran from Kuwait to Iraq. Routed and hemmed in by U.S. forces, the retreating Iraqi vehicles – and the people inside them – were hammered by American gunships, the proverbial “shooting fish in a barrel,” Professor Lin says. “You could say, ‘No problem. They’re enemy combatants; it’s fair game.’” But it was “so easy that the optics of it looked super bad and the operation stopped.”

“This starts us down the road to the idea of fair play – it’s not just a hangover from chivalry days. If you fight your enemy with honor and provide some possibility for mercy, it ensures the possibility for reconciliation.” In other words, “we have ethics of war,” Professor Lin says, “in order to lay the groundwork for a lasting peace.”