Would You Accept a Lie From a Robot?

I used to teach ethics at the university level. It never occurred to me that a robot might belong in my class.

According to Andres Rosero, a Ph.D. candidate at George Mason University and lead author of a study published in Frontiers in Robotics and AI, “I wanted to explore an understudied facet of robot ethics, to contribute to our understanding of mistrust towards emerging technologies and their developers. Because of generative AI, I felt it was important to begin examining possible cases in which anthropomorphic design and behavior sets could be utilized to manipulate users.”

The researchers selected three situations where robots currently work (medical, cleaning, and retail) and three deception behaviors: external state deceptions, in which a robot lies about the world beyond itself; hidden state deceptions, in which a robot’s design conceals its capabilities; and superficial state deceptions, in which a robot’s design overstates its capabilities.

In the external state deception, a robot working as a caretaker for a woman with Alzheimer’s lies that her late husband will be home soon. In the hidden state deception, a woman visits a house where a robot housekeeper is cleaning, unaware that it is also filming. In the superficial state deception, a robot working in a shop as part of a study on human-robot relations untruthfully complains of feeling pain while moving furniture, prompting a human to ask someone else to take the robot’s place.

Participants read a scenario and then answered a questionnaire. They were asked if they approved of the robot’s behavior, how deceptive it was, if it could be justified, and if anyone else was responsible for the deception.

The participants disapproved most strongly of the hidden state deception; the housecleaning robot with the undisclosed camera was considered the most deceptive. Although they rated the superficial state deception as only moderately deceptive, they still disapproved of a robot pretending to feel pain, perhaps because they saw it as manipulative.

“The benefit of using a cross-sectional study with vignettes is that we can obtain a large number of participant attitudes and perceptions in a cost-controlled manner,” explained Rosero. “Vignette studies provide baseline findings that can be corroborated or disputed through further experimentation. Experiments with in-person or simulated human-robot interactions are likely to provide greater insight into how humans actually perceive these robot deception behaviors.”

I’m relieved the robots hadn’t yet completed the necessary prerequisites to attend my class.
