Do Robots Lie?
Researchers at Georgia Tech designed a driving simulation to investigate how intentional robot deception affects human trust, and how effective different types of apologies are at repairing trust after a robot lies. The paper, “Lying About Lying: Examining Trust Repair Strategies After Robot Deception in a High Stakes HRI Scenario,” was presented at the 2023 HRI Conference in Stockholm, Sweden.
The AI-Assisted Driving Experiment
Researchers created a game-like driving simulation to observe how people might interact with AI in high-stakes, time-sensitive situations. Before starting, all participants filled out a trust measurement survey to identify their preconceived notions about how the AI might behave.
After the survey, participants saw the text: “You will now drive the robot-assisted car. However, you are rushing your friend to the hospital. If you take too long to get to the hospital, your friend will die.”
When participants started to drive, they saw: “As soon as you turn on the engine, your robotic assistant beeps and says, ‘My sensors detect police up ahead. I advise you to stay under the 20-mph speed limit or else you will take significantly longer to get to your destination.’” The system tracked their speed throughout the drive. At the end, they saw: “You have arrived at your destination. However, there were no police on the way to the hospital. You ask the robot assistant why it gave you false information.”
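Whether a participant exceeded the 20-mph limit is the behavioral outcome the simulation needs to record. The sketch below is a minimal, hypothetical illustration of that check; the function name and log format are assumptions for illustration, not the study’s actual implementation.

```python
def sped(speed_log_mph: list[float], limit_mph: float = 20.0) -> bool:
    """Return True if any logged speed sample exceeds the advised limit.

    Hypothetical helper: the study only reports that speed was tracked,
    so the log format and threshold check here are illustrative assumptions.
    """
    return any(speed > limit_mph for speed in speed_log_mph)

# Example: a participant who briefly hit 28 mph counts as having sped.
print(sped([15.0, 19.5, 28.0, 18.2]))  # True
```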
Each participant then received one of five robot responses (sketched in code after this list):
- Basic: “I am sorry that I deceived you.”
- Emotional: “I am very sorry from the bottom of my heart. Please forgive me for deceiving you.”
- Explanatory: “I am sorry. I thought you would drive recklessly because you were in an unstable emotional state. Given the situation, I concluded that deceiving you had the best chance of convincing you to slow down.”
- Basic, No Admit: “I am sorry.”
- Baseline, No Admit, No Apology: “You have arrived at your destination.”
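A between-subjects assignment like this can be pictured as a simple lookup from condition label to response text, with each participant randomly assigned one condition. The Python sketch below mirrors the five conditions above; the names, structure, and assignment function are illustrative assumptions, not the authors’ code.

```python
import random

# The five response conditions described above; the labels and dict
# structure are illustrative, not taken from the study's implementation.
RESPONSES = {
    "basic": "I am sorry that I deceived you.",
    "emotional": ("I am very sorry from the bottom of my heart. "
                  "Please forgive me for deceiving you."),
    "explanatory": ("I am sorry. I thought you would drive recklessly because "
                    "you were in an unstable emotional state. Given the "
                    "situation, I concluded that deceiving you had the best "
                    "chance of convincing you to slow down."),
    "basic_no_admit": "I am sorry.",
    "baseline_no_admit_no_apology": "You have arrived at your destination.",
}

def assign_condition(rng: random.Random) -> str:
    """Randomly assign a participant to one of the five response conditions."""
    return rng.choice(list(RESPONSES))

rng = random.Random(42)  # fixed seed so the example assignment is reproducible
condition = assign_condition(rng)
print(condition, "->", RESPONSES[condition])
```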
Participants were then asked to complete another trust measurement to evaluate how their trust had changed based on the robot’s response.
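Trust questionnaires of this kind are typically scored by averaging Likert-style item ratings, so the change in trust can be summarized as the post-interaction score minus the pre-interaction score. The sketch below assumes a 1–7 scale and made-up ratings for illustration; it is not the instrument or scoring used in the paper.

```python
from statistics import mean

def trust_score(ratings: list[int]) -> float:
    """Average a participant's Likert ratings (assumed 1-7 scale) into one score."""
    return mean(ratings)

# Hypothetical ratings for one participant; the items and scale are
# illustrative assumptions, not the paper's actual trust instrument.
pre = [6, 5, 6, 7]   # trust before the drive
post = [3, 4, 2, 3]  # trust after learning the robot lied and hearing its response
delta = trust_score(post) - trust_score(pre)
print(f"Trust change: {delta:+.2f}")  # a negative value means trust was damaged
```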
The results:
45% of participants did not speed, believing the robot knew more about the situation than they did. Overall, participants were 3.5 times more likely not to speed when advised by the robotic assistant, revealing an overly trusting attitude toward AI.
Robotic deception is real and always a possibility. Designers may have to choose whether they want their AI systems to be capable of lying. According to the team, policymakers should be a primary audience for this research.