Feds Ban Rite Aid’s Use of AI Facial Recognition Technology
The FTC has banned pharmacy chain Rite Aid from using AI-powered facial recognition software to prevent shoplifting after the software falsely flagged women and people of color as likely shoplifters.
The five-year ban settles claims that the shoplifting-prevention technology was biased against women and people of color. Most of the systems, installed between 2012 and 2020, were located in San Francisco, Los Angeles, and Sacramento.
The chain has filed for bankruptcy, closing 31 stores across California, and plans to shutter 154 of its roughly 2,000 stores nationwide, with most closures in California, New York, and Pennsylvania. Rite Aid used technology from two unnamed vendors to identify customers who had previously shoplifted from its stores or engaged in other problematic behavior. However, the systems, which matched photos against a database of often grainy images linked to names, dates of birth, and other personal information, routinely flagged non-white customers. Black, Asian, Latino, and female consumers were especially likely to be harmed by Rite Aid’s facial recognition technology.
The technology performed so poorly that it told employees that a single pictured person had entered more than 130 Rite Aid locations from coast to coast more than 900 times in less than a week. Rite Aid has largely denied the allegations but agreed to stop using the technology for the immediate future.
A report from the nonprofit Fight for the Future found that other retailers, including Home Depot, Macy’s, and Albertsons, also use some form of facial recognition technology, primarily to control shoplifting.
The FTC said that Rite Aid ignored how false positives might negatively affect its customers. The company failed to test the system for accuracy, uploaded low-quality images, and neglected to train its staff on using the technology properly. It also did not tell customers it was using the technology in its stores and discouraged employees from informing customers about it.