
AI facial recognition: Threats demand regulatory response absent vendors’ voluntary restraint

Published on October 20, 2021

The automatic photo-tagging feature on social media is quick and fairly accurate, but it is far from secure. Most concerning, photos on social networks are publicly available and are being scraped into privately held AI facial recognition databases, without your consent. The unregulated collection and use of biometric data is a risk compounded by the rise in cyberattacks.

With facial recognition technology becoming more widespread, your facial signature could end up in numerous places, and you cannot be sure who has access to your biometrics. Some might counter that any damage from data privacy abuses is outweighed by the technology’s beneficial applications.

True, facial recognition systems have delivered real benefits, from helping law enforcement find missing children and arrest dangerous criminals to verifying identities at airports and banks. Yet even in these cases, the technology’s shortcomings pave the way to abuse, this time in the form of civil rights violations.

As we’ll see below, the data privacy, civil rights, and other threats posed by facial recognition must be addressed if the technology is to do more good than harm.

High-stakes application of AI facial recognition

Facial recognition databases play a significant role in law enforcement. Agencies routinely collect mugshots from those who have been arrested and compare them against local, state, and federal facial recognition databases. The FBI, for example, has access to over 650 million photos drawn from several state databases.

Live facial recognition matches the faces of people walking past surveillance cameras against images of people on a watch list. Watch lists can contain pictures of anyone, including people not suspected of any misconduct, and the images can be sourced from anywhere, even from our social media accounts. Any photo tagged with a person’s name, for example, becomes part of Facebook’s database, which may also be used for facial recognition.
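To make those mechanics concrete, here is a minimal sketch of how watch-list matching typically works under the hood: each face is converted to a numeric embedding, and a probe face raises an alert when its embedding falls within a distance threshold of any watch-list entry. This is an illustration only; the embedding function and threshold below are hypothetical stand-ins for whatever a given vendor actually deploys.

    import numpy as np

    EMBEDDING_DIM = 128    # typical face-embedding size; real systems vary
    MATCH_THRESHOLD = 0.6  # hypothetical cosine-distance cutoff; vendors tune this

    def embed_face(image: np.ndarray) -> np.ndarray:
        """Stand-in for a real face-embedding model (normally a deep CNN).
        Returns a unit-length vector so cosine distance is well defined."""
        rng = np.random.default_rng(int(abs(image.sum())))
        vec = rng.normal(size=EMBEDDING_DIM)
        return vec / np.linalg.norm(vec)

    def cosine_distance(a: np.ndarray, b: np.ndarray) -> float:
        return 1.0 - float(np.dot(a, b))

    def match_against_watchlist(probe: np.ndarray, watchlist: dict):
        """Return (name, distance) for the closest watch-list entry below the
        threshold, or None if nobody on the list matches."""
        name, ref = min(watchlist.items(),
                        key=lambda kv: cosine_distance(probe, kv[1]))
        dist = cosine_distance(probe, ref)
        return (name, dist) if dist <= MATCH_THRESHOLD else None

Every captured face is embedded once and compared against the entire list, so where MATCH_THRESHOLD sits determines how often innocent passers-by trigger alerts; that trade-off resurfaces in the Rekognition dispute below.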

How the data is acquired matters

Pitching itself as a way to equip law enforcement with cutting-edge technology, Clearview AI revealed an extensive database of over 10 billion images. The company also disclosed that any of its users can upload a person’s picture, and the software can then potentially reveal that person’s identity.

An investigation of Clearview AI by The New York Times in early 2020 revealed that the company had been scraping online pictures to build a vast facial recognition database. Initially, Clearview AI claimed that its app was meant to be used only by law enforcement and a limited number of private companies, but the Times article made it clear that the company had consistently misrepresented the extent of its operations and ambitions.

BuzzFeed News reported that a leaked Clearview AI client list contained references to hundreds of police departments and federal agencies in the United States, including ICE and Customs and Border Protection. The leaked list also included a startling assortment of private buyers such as the NBA, Best Buy, Walmart, and Macy’s. Several of these companies have since distanced themselves from Clearview AI, with a few insisting they had conducted nothing more than trial runs.

Since then, the start-up has faced a slew of lawsuits and regulatory complaints. Complaints filed in France, Austria, Greece, Italy, and the United Kingdom allege that the company’s method of collecting and storing data, including images of faces it automatically extracts from public websites, violates European privacy laws.

Subduing Clearview AI

Clearview AI’s tool has faced severe criticism from tech organizations as well as United States authorities. Despite cease-and-desist letters from platforms including Google, Twitter, YouTube, Facebook, and LinkedIn, Clearview AI CEO Hoan Ton-That maintains that the company has a First Amendment right to use publicly available information in its system.

However, in August 2021, an Illinois state court ruled in favor of the American Civil Liberties Union (ACLU) in its suit against the start-up. The judge held that Clearview AI could not use the First Amendment as a defense and could continue doing business in Illinois only with residents’ consent.

The risk of unintentional biases

Similarly, Amazon’s cloud-based facial recognition tool, Rekognition, has become the target of growing opposition nationwide. In a 2018 test, the ACLU ran photos of every member of Congress against a database of 25,000 publicly available mugshots; the software falsely matched 28 of them.

Amazon disputed the results, arguing that the ACLU had left the system at its default 80% confidence threshold rather than the 99% threshold Amazon recommends for law enforcement use. But a year later, the ACLU of Massachusetts found that Rekognition falsely matched 27 New England professional athletes to mugshots, and both tests disproportionately mismatched people of color.
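To see what that threshold dispute means in practice: Rekognition exposes the confidence cutoff as a single API parameter. Below is a hedged sketch using AWS’s boto3 SDK; the bucket and file names are hypothetical placeholders, but SimilarityThreshold is the real parameter at issue.

    import boto3

    rekognition = boto3.client("rekognition", region_name="us-east-1")

    def compare(threshold: float):
        """Compare a probe photo against a mugshot at a given similarity
        threshold. Bucket and object keys are hypothetical placeholders."""
        response = rekognition.compare_faces(
            SourceImage={"S3Object": {"Bucket": "demo-bucket", "Name": "probe.jpg"}},
            TargetImage={"S3Object": {"Bucket": "demo-bucket", "Name": "mugshot.jpg"}},
            SimilarityThreshold=threshold,  # matches scoring below this are dropped
        )
        return response["FaceMatches"]

    # At the default 80% threshold a borderline pair may come back as a match;
    # at the 99% threshold Amazon recommends for law enforcement, it likely won't.
    loose_matches = compare(80.0)
    strict_matches = compare(99.0)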

Researchers at MIT’s Media Lab have found that facial recognition algorithms identify white men more accurately than women and people of color because training databases contain far more images of white men, creating an unintended bias. The ACLU’s results bear this concern out: nearly 40% of Rekognition’s false matches in the first test were of people of color, even though they make up only about 20% of Congress.
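Spelling out that skew with the article’s own numbers (the 11-of-28 split below is inferred from the “nearly 40%” figure, so treat it as approximate):

    # Disparity arithmetic for the 2018 ACLU test.
    false_matches_total = 28
    false_matches_poc = 11        # inferred from "nearly 40%" of 28
    poc_share_of_congress = 0.20  # people of color are ~20% of Congress

    share_of_errors = false_matches_poc / false_matches_total      # ~0.39
    overrepresentation = share_of_errors / poc_share_of_congress   # ~2x
    print(f"People of color drew {share_of_errors:.0%} of the false matches, "
          f"about {overrepresentation:.1f}x their share of Congress.")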

Amazon, IBM, and Microsoft pause their sale of AI facial recognition

Amazon came under scrutiny after Rekognition showed bias against people of color. In June 2020, the company announced a one-year moratorium on police use of its facial recognition technology, though its brief blog post carved out exceptions for organizations like Thorn, the International Centre for Missing & Exploited Children, and Marinus Analytics, which use the technology to help rescue human trafficking victims and reunite missing children with their families.

That same week, IBM and Microsoft announced that they, too, would stop providing police with their facial recognition technology, on the grounds that AI systems used in law enforcement must first be tested for bias. In May 2021, Amazon extended its moratorium on police use until further notice.

The current status of AI facial recognition

Facial recognition risks becoming yet another erosion of personal privacy. Let’s evaluate the technology on two fronts: discrimination and privacy.

  1. Discrimination: Facial recognition is never 100% accurate. Reported error rates for children, women, and people of color run as high as 35%, while accuracy for white men approaches 99%. It can be argued that models will improve as training data diversifies, but for now the technology puts ethnic minorities at risk (see the back-of-the-envelope sketch after this list).
  2. Privacy: Illinois’s Biometric Information Privacy Act requires companies to obtain individuals’ explicit consent before collecting their biometric information, and the Commercial Facial Recognition Privacy Act, introduced in the US Senate in March 2019, would extend similar consent requirements nationwide. Providers of facial recognition technologies and state organizations need to be mindful of these laws and the compliance measures they demand.
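Even the best-case accuracy figure above is unforgiving at scale. A back-of-the-envelope sketch, assuming a hypothetical crowd of 50,000 faces scanned against a watch list:

    # Back-of-the-envelope: errors at crowd scale (crowd size is hypothetical).
    crowd_size = 50_000            # roughly one stadium's worth of scanned faces
    accuracy_white_men = 0.99      # best-case figure cited above
    error_rate_minorities = 0.35   # worst-case figure cited above

    print(f"Erroneous results among white men:  {crowd_size * (1 - accuracy_white_men):,.0f}")
    print(f"Erroneous results among minorities: {crowd_size * error_rate_minorities:,.0f}")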

David Scalzo, an early investor in Clearview AI, said:

“I’ve come to the conclusion that because information constantly increases, there’s never going to be privacy. Laws have to determine what’s legal, but you can’t ban technology. Sure, that might lead to a dystopian future or something, but you can’t ban it.”

We cannot overlook the breakthroughs facial recognition has enabled. But given the amount of data the technology collects and stores, the concerns it raises cannot be ignored either. If vendors won’t restrain themselves voluntarily, we will need legislation that compels them to address these privacy, discrimination, and other abuse issues.
