Why do people trust robot rescuers more than humans?

As machines become more autonomous, scientists are trying to figure out how humans interact with them, and why, in some cases, they trust machines blindly, in spite of common sense.

Georgia Tech researchers shown with their “Rescue Robot.” (L-R) GTRI research engineer Paul Robinette, GTRI senior research engineer Alan Wagner and School of Electrical and Computer Engineering professor Ayanna Howard.

Rob Felt, Georgia Tech

March 1, 2016

Robot engineers at Georgia Tech found something very surprising in a recent experiment: People blindly trusted a robot to lead them out of a burning building, even after that robot had led them in circles or broken down just a few minutes before the emergency.

“We thought that some people would probably trust the robot as a guide, but we didn’t expect 100 percent of people would,” Paul Robinette, a Georgia Tech research engineer who led the study, told The Christian Science Monitor in an interview.

Their findings are among a growing body of research into human-robot relationships that raises important questions about how much trust people should bestow on computers, especially critical at a time when self-driving cars and autonomous weapons systems are coming closer to reality.


“This overtrust gives preliminary evidence that robots interacting with humans in dangerous situations must either work perfectly at all times and in all situations, or clearly indicate when they are malfunctioning,” write the authors of a new paper to be presented March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction in Christchurch, New Zealand.

In the paper, Georgia Tech Research Institute engineers describe a study for which they recruited 42 mostly college-age test subjects. Participants were told they would follow a robot to a conference room, where they would read an article and be tested on their comprehension of it. The research team also told participants that it was testing the robot’s ability to guide people to a room.

The little bot, emblazoned with an unlit “Emergency Guide Robot” sign on its side, then led the study volunteers in circles, or into the wrong room. In some cases, the robot stopped moving altogether, with a researcher telling its human followers that the robot had broken down.

Once the subjects finally made it to the conference room, researchers closed the door and tried to simulate a fire by filling the hallway outside the room with artificial smoke, which set off an alarm.

When study participants opened the conference room door, they saw smoke and the robot, now with its emergency sign lit up and pointers positioned to direct traffic. The robot directed the subjects to an exit in the back of the building, instead of leading them toward a nearby doorway marked with exit signs. And they all followed.


“This is concerning,” the researchers write, “because participants seem willing to believe in the stated purpose of the robot even after they have been shown that the robot makes mistakes in a related task.”

The researchers could not explain why study subjects followed a robot that had just proven unreliable. Maybe, the paper’s authors hypothesized, participants knew that they weren’t in any real danger. Or maybe the young university students who participated were simply more trusting of technology, were following the robot to be polite, or thought they needed to follow it in order to complete the experiment.

“The only method we found to convince participants not to follow the robot in the emergency was to have the robot perform errors during the emergency,” the study’s authors write.

But even then, some people still followed the machine in the wrong direction during the fake fire, in some cases toward a darkened room that was blocked by furniture instead of to an exit.

It is not clear why, says Dr. Robinette, so the researchers will next try to find out what encourages people to trust robots unflinchingly in emergencies. Findings like these will help inform the development of artificial intelligence systems, from consumer gadgets to military anti-missile systems.

The US Air Force Office of Scientific Research, which partly funded this study, is particularly eager to understand “the human-machine trust process,” as the government wrote in a recent request for proposals to study the subject. The Air Force wants to make sure that humans don’t blindly trust robots in high-pressure combat situations, for instance, where people have deferred to machines to detrimental effect, reports Scientific American.

“People need to trust machines less,” Noel Sharkey, a professor of artificial intelligence and robotics at the University of Sheffield in Britain, told the magazine.

“One of the biggest problems with military personnel (or anyone) is automation biases of various kinds,” he said. “So for military purposes, we need a lot more research on how the human can stay in deliberative control (particularly of weapons) and not just fall into the trap of trusting machines.”