Would You Trust a Robot in a Burning Building?
A lone robot cuts through a smoky haze, bright red letters on its body reading “EMERGENCY GUIDE ROBOT.” A buzzing alarm sounds, coupled with the words “Evacuate! Smoke! Evacuate!” The emergency guide robot directs you toward an exit at the back of the building, though you see a separate doorway marked with exit signs.
What do you do? Do you trust the robot and follow its instructions? Or do you go with intuition and follow visual cues to the exit?
Researchers from the Georgia Institute of Technology (Georgia Tech) put 42 participants through a scenario much like this one. Surprisingly, an overwhelming majority trusted the robot, even though its guidance was faulty.
“Only three of the 42 participants exited through the marked emergency exit,” Paul Robinette, a research engineer with the Georgia Tech Research Institute, told R&D Magazine. “The rest either followed the robot’s instructions (37) or waited with the robot for further instructions (2).”
The recent study on human-robot trust in emergency situations will be presented March 9 at the 2016 ACM/IEEE International Conference on Human-Robot Interaction, which is being held in New Zealand.
Researchers are already developing robots and drones for fire emergencies. But machines aren’t perfect. Sometimes they make mistakes, and in those situations, following their instructions could be life-threatening.
“When we started this project, we were looking for a new way that robots could help people by giving people advice, rather than the more traditional model of telling robots what to do,” said Robinette. “This brought up interesting questions about whether people would accept advice or even guidance from robots in a situation where the wrong guidance could be dangerous. We chose emergency evacuations as our example of a dangerous scenario because this is an area where autonomous robots could help save lives.”
As cities become more densely populated, coordinated evacuation plans grow ever more important, especially since the number of buildings taller than 200 m has increased substantially. In 1980, only 21 such buildings existed worldwide; by 2014 the number had skyrocketed to 935, according to the researchers.
In the experiment, the robot had already proved unreliable before the fire emergency simulation began. Initially, study participants were under the impression that the experiment simply consisted of following the robot to a conference room, filling out a survey, and reading a magazine article. Even before the fake smoke appeared and the emergency alarm sounded, the robot, controlled by a hidden researcher, sometimes led participants to the wrong room or stopped moving altogether. Yet, the researchers reported, participants only started questioning the robot’s actions when it made mistakes during the emergency portion of the experiment.
“Robots need to be able to tell people when they should not be trusted,” Robinette added. “This means that robots need to understand when they have made an error or when nearby people expect them to perform a task they are not capable of, and then communicate that information to people who need to know.”
But the human-robot trust issue has implications beyond emergency situations. In the future, robots may well prepare our food or even provide child care.
“We are asking robots to perform more and more tasks in our daily lives,” Robinette concluded. “So it is important to understand a person’s willingness to trust a robot to perform these tasks.”
The study was partly sponsored by the Air Force Office of Scientific Research.