The Placebo Effect of AI
A new study from Aalto University in Finland suggests that people perform better when they use AI because they expect to perform better. In the study, participants completed a simple letter-recognition exercise. They did the task twice: once on their own and once with the help of an AI system… or so they thought.
Researchers told half of the participants that the system was reliable and would enhance their performance, and the other half that it was unreliable and would worsen it. In reality, the AI system didn’t exist.
The participants paired letters that appeared on screen at varying speeds. Surprisingly, both groups performed the exercise more efficiently, more quickly, and more attentively when they thought AI was involved.
“What we discovered is that people have extremely high expectations of these systems, and we can’t make them AI doomers simply by telling them a program doesn’t work,” says Assistant Professor Robin Welsch.
After the initial experiments, the team conducted an online replication study that produced similar results. This time they also asked participants to describe their expectations of performing a task with AI assistance. Most had a positive outlook toward AI, even the skeptics.
These expectations pose a challenge for the methods commonly used to evaluate emerging AI systems: because of AI’s placebo effect, it is difficult to tell whether a program that promises to help actually does. The same problem affects research on human-computer interaction, where expectations can skew outcomes unless placebo-controlled studies are used.
“These results suggest that many studies in the field may have been skewed in favor of AI systems,” concludes Welsch.