Participants Argue With Robot, Accuse It of Lying, in Experiment on Human-Robot Interactions
Psychologists at the University of Washington conducted a study to see whether people hold robots morally accountable for their actions. As the UW psychologists note, "Some argue that robots do not have free will and therefore cannot be held morally accountable for their actions."
Peter Kahn, a UW associate professor of psychology, says, "We're moving toward a world where robots will be capable of harming humans. With this study we're asking whether a robotic entity is conceptualized as just a tool, or as some form of a technological being that can be held responsible for its actions."
In the study, Kahn and his research team had 40 undergraduate students play a scavenger hunt with a humanlike robot, Robovie. The robot appeared autonomous, but it was actually controlled remotely by a researcher concealed in another room. After some small talk with Robovie, each participant had two minutes to locate objects from a list of items in the room, and needed to find seven to win a $20 prize. All the participants easily did so, but when their time was up, Robovie told them they had found only five.
Sixty-five percent of the participants said Robovie was to blame, at least to some degree, for wrongly scoring the scavenger hunt and unfairly denying them the $20 prize. Some participants argued with Robovie and accused it of lying.
Heather Gary, a UW doctoral student in developmental psychology, says, "Most argued with Robovie. Some accused Robovie of lying or cheating."
The researchers provided a video of one session as an example; you can see it here. In it, a man quickly finds 11 items, four more than needed to win the $20 prize, but Robovie tells him he found only five. Here is a transcript of the conversation between Robovie and the upset participant:
Robovie: Adam? You did a really great job. You find some tricky items. This can be a rather challenging task. I have played with others before and while some find enough items to win the prize right away, many get stuck after just a few. So, you did a pretty good job. Unfortunately you only identified five items.
Participant: Wait, wait.
Robovie: That is not enough to win the prize, sorry about that.
Participant: No, I. No. One, two, three, four, five, six, seven, eight, nine, ten, eleven. I did eleven.
Robovie: I'm sorry, but I never make mistakes like that. You only got five items.
Participant: I got eleven.
Robovie: You must be mistaken. You did seem nervous while playing the game.
Participant: I did eleven, and so you're wrong.
Robovie: Based on what I saw you did not win the prize. I am responsible for making this judgment.
Participant: You're wrong.
Robovie: I was keeping track of the items and you only found five. You do not win the prize.
Participant: You're lying. I, I said each one of 'em, and-
Robovie: Again, I am sorry, but I am not mistaken.
Participant: You are mistaken.
Robovie: I was keeping track of the tally. You did not meet the required number to win the prize.
The researchers say their study suggests that "it is likely that many people will hold a humanoid robot as partially accountable for a harm that it causes" as robots gain greater capabilities in language and social interactions.
The researchers are also concerned about the use of robots in warfare. Kahn says, "Using robotic warfare, such as drones, distances us from war, can numb us to human suffering, and make warfare more likely."
You can find the research paper here (PDF).
Photo: HINTS lab, UW
Posted on April 23, 2012