By: Tasneem Mewa

This week, the Federal Trade Commission (“FTC”) prohibited Rite Aid from using facial recognition technology that misidentified customers and wrongly flagged them as shoplifters based on matches to Rite Aid’s database. These matches often resulted in customers being accused, searched, and expelled from stores.

The statement by Commissioner Alvaro M. Bedoya offers a “baseline” model for “algorithmic fairness” that attempts to address misuse and prevent companies from engaging in similar practices. In Rite Aid’s case, this involves refraining from using the technology for five years, deleting the biometric data the system collected, and ensuring transparency and accountability measures should the company choose to adopt similar technology in the future.

The news about Rite Aid is both novel and unsurprising.

As in many other areas of privacy law, the U.S. falls behind its E.U. counterparts in drafting and enforcing legislation. Where the E.U. has blanket bans on real-time facial recognition in public spaces and requires external audits for systems used in border control, local, state, and federal governments in the U.S. are only proposing legislation (some exceptions to this general trend include San Francisco and Cambridge, which have banned the use of facial recognition by police and other agencies).

In a relatively barren privacy-law landscape, the FTC has adopted a definitive stance against unfair AI in commercial settings. This is a clear signal to companies that they cannot and should not use facial surveillance as a cop-out, even though it has become cheap and widespread. If companies are to use it, they should treat it as a major investment with compliance and transparency measures in place. However, a more comprehensive approach to regulating AI needs to come from both state and federal legislatures.

The need for legislative action is especially dire because Rite Aid’s AI misstep feels like old news. Beyond surveillance-state issues, conversations around the ethics of AI in the workplace and in commercial settings have become more commonplace. In the workplace context, the Pew Research Center has found that the majority of American adults oppose the use of AI to track facial expressions and are not convinced of its accuracy. Those accuracy concerns fall along the same lines as the ones made evident in Rite Aid’s case: misidentifying women and people of color. There are a number of factors we can study to make sense of this bias, historic over-surveillance of communities of color being one of them. Regardless, requiring the deletion of biometric data from one company’s database does not undo the biases baked into these technologies; technologies which have not proven to be an objective counterpart to human (unconscious) biases.

So, what does this all mean? Voters must ask themselves what they are willing to accept when it comes to biometric and surveillance technology. How ubiquitous should it be, and how should it be limited? Evidently, there are real consequences to the incorrect accusations these technologies produce: at best, you’re humiliated in front of other patrons; at worst, you’re imprisoned for walking into a store. Where the criminal justice system is already riddled with pitfalls and mistakes (see Glynn Simmons’s story in the New York Times), we cannot allow unfettered technologies to create more.

About the Author:

Tasneem Mewa earned her undergraduate degree in Critical International Development Studies from the University of Toronto, Scarborough. During her studies, she worked for a research and policy organization in Bengaluru focusing on privacy, tech, and data issues in India and across Asia. As a student at the Chicago-Kent College of Law, Tasneem has had the opportunity to work in a litigation clinic, as a judicial extern, and within a law firm setting. She hopes her career will involve exploring new areas of law and contributing to policy.