Rite Aid Banned from Using Facial Recognition Technology After Falsely Identifying Shoplifters

The Federal Trade Commission has banned Rite Aid from using facial recognition software for five years. The decision comes after the FTC found that the company’s “reckless implementation of facial surveillance systems” not only humiliated customers but also put their personal information at risk.

The FTC’s order, which awaits approval from the U.S. Bankruptcy Court following Rite Aid’s Chapter 11 bankruptcy filing in October, also requires the company to delete all images collected during its facial recognition rollout, along with any products created from those images. Additionally, Rite Aid must establish a comprehensive data security program to safeguard any personal data it collects.

A previous Reuters report revealed that the drugstore chain had covertly introduced facial recognition systems in approximately 200 U.S. stores between 2012 and 2020, primarily in “lower-income, non-white neighborhoods” where the technology was tested.

As the FTC sharpens its scrutiny of the misuse of biometric surveillance, Rite Aid has become a prime target for the agency. Among the allegations is that Rite Aid, in collaboration with two contracted companies, built a “watchlist database” of images of customers suspected of criminal activity at one of its stores. These images, often low-quality and captured from CCTV or employees’ smartphones, were used to flag potential threats.

When a customer who supposedly matched an image in the database entered a store, employees received an automatic alert instructing them to take action – typically to “approach and identify,” meaning they would verify the customer’s identity and ask them to leave. Many of these “matches” were in fact false positives, leading to wrongful accusations and causing “embarrassment, harassment, and other harm” to customers, according to the FTC.

The FTC’s complaint also states that Rite Aid failed to inform customers that facial recognition technology was in use and even instructed employees to keep it secret from them.

Face-off

Facial recognition software has become one of the most controversial aspects of our AI-powered surveillance culture. In recent years, many cities have banned the technology, while politicians have fought to regulate its use by law enforcement. Companies such as Clearview AI have faced lawsuits and fines around the world for major data privacy breaches involving facial recognition technology.

The FTC’s latest findings concerning Rite Aid also expose the inherent biases present in AI systems. According to the FTC, Rite Aid failed to mitigate risks to certain consumers based on their race – the technology was “more likely to generate false positives in stores located in plurality-Black and Asian communities than in plurality-White communities,” the agency noted.

Furthermore, the FTC found that Rite Aid did not test or measure the accuracy of its facial recognition system either before or after deployment.

In a press release, Rite Aid said it was pleased to reach an agreement with the FTC but disagreed with the core allegations made against it.

“The allegations relate to a facial recognition technology pilot program the Company implemented in a limited number of stores,” Rite Aid stated. “The use of this technology was ceased in these specific stores over three years ago, prior to the FTC’s investigation into its implementation.”

Zara Khan

Zara Khan is a seasoned investigative journalist with a focus on social justice issues. She has won numerous awards for her groundbreaking reporting and has a reputation for fearlessly exposing wrongdoing.
