Facial Recognition Technology Comes Under Scrutiny

By Patrick McKnight, Klehr Harrison Harvey Branzburg LLP

 

Facial recognition technology faces new scrutiny under a bill being considered by Congress. The Facial Recognition and Biometric Technology Moratorium Act (S. 4084) was introduced by Senators Ed Markey of Massachusetts and Jeff Merkley of Oregon, along with Representatives Pramila Jayapal of Washington and Ayanna Pressley of Massachusetts.

 

What is Facial Recognition Technology?

Facial recognition technology uses image data to create a biometric map, or template, of the human face. Once that data is collected, algorithms compare incoming images against stored templates, looking for matching facial features and dimensions.
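
To make the process concrete, here is a minimal sketch using the open-source face_recognition Python library (chosen purely for illustration; commercial systems rely on proprietary models and far larger databases, and the file names below are hypothetical). Each face is reduced to a numeric vector, and incoming images are matched by measuring the distance between vectors:

    # Minimal face-matching sketch using the open-source "face_recognition"
    # library. File names are hypothetical placeholders.
    import face_recognition

    # "Enroll" a known face: the library reduces it to a 128-number vector.
    known_image = face_recognition.load_image_file("known_person.jpg")
    known_encoding = face_recognition.face_encodings(known_image)[0]

    # Encode every face found in a new, incoming photo.
    unknown_image = face_recognition.load_image_file("incoming_photo.jpg")
    unknown_encodings = face_recognition.face_encodings(unknown_image)

    for encoding in unknown_encodings:
        # A match is declared when the vectors fall within a distance
        # threshold (the library's default tolerance is 0.6).
        is_match = face_recognition.compare_faces([known_encoding], encoding)[0]
        distance = face_recognition.face_distance([known_encoding], encoding)[0]
        print(f"Match: {is_match} (distance: {distance:.2f})")

Deployed systems work the same way at scale: templates for millions of faces sit in a database, and each new image is searched against them.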

 

This might sound like a legal issue for the future, but facial recognition technology is far more widespread than many attorneys or members of the general public realize. The commercial market for facial recognition technology is exploding. Current commercial uses range from advertising and marketing to security and surveillance. Some estimates predict the facial recognition market will be worth nearly $9.6 billion by 2022. Facial recognition is already being tested at large public sporting events in the United States.

 

Facial recognition requires a database of information. Social media companies and smartphone applications obtain this information easily: users voluntarily upload their photos to these platforms, and studies show they rarely read the terms of service agreements governing that data.

 

Facial recognition is also increasingly used by law enforcement, a trend some civil liberties groups say should be cause for concern. Recent studies indicate many facial recognition systems are less accurate at correctly identifying people with darker skin tones. These studies, combined with increased scrutiny of police tactics in the wake of the killing of George Floyd, have raised concerns that facial recognition technology could be used in a discriminatory manner.

 

The Facial Recognition and Biometric Technology Moratorium Act

The Facial Recognition and Biometric Technology Moratorium Act contains several provisions aimed at addressing concerns about facial recognition technology, including:

  • Federal agencies would be prohibited from using biometric tools, including facial and voice recognition, until Congress acts to lift the moratorium.
  • No federal funds could be used to develop biometric surveillance systems.
  • Data collected in violation of the Act would be barred from use in judicial proceedings.
  • Individuals would have a private right of action when their biometric data is used in violation of the Act.
  • Federal funding would be restricted for state and local entities that have not enacted their own moratoria on biometric surveillance.
  • State attorneys general would have the ability to enforce the Act.
  • State and local governments would be allowed to enact additional restrictions on the use of biometric data.

 

Facebook Settles Class Action for $650 Million Regarding Its Facial Recognition Technology

In late January 2020, Facebook announced it would pay $550 million to settle a class action suit over its use of facial recognition technology. In July 2020, the settlement amount was increased to $650 million.

 

The action was filed under the Illinois Biometric Information Privacy Act (BIPA). BIPA was enacted in 2008, making Illinois the first state to pass legislation aimed at protecting biometric data. Among other provisions, BIPA requires that consent be obtained before a user’s biometric data is collected. Because BIPA provides statutory damages of $1,000 to $5,000 for each violation of the law, a verdict could have exposed Facebook to billions of dollars in damages. The settlement is likely the largest in a facial recognition case to date.
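
To see why the potential exposure reached into the billions, the arithmetic is straightforward (the class size below is an illustrative assumption, not the certified figure in the Facebook litigation):

    # Back-of-the-envelope BIPA exposure. The class size is an illustrative
    # assumption, not the actual certified class.
    class_members = 6_000_000        # hypothetical number of Illinois users
    negligent_per_violation = 1_000  # BIPA statutory damages, negligent violation
    reckless_per_violation = 5_000   # BIPA statutory damages, intentional/reckless violation

    low_end = class_members * negligent_per_violation
    high_end = class_members * reckless_per_violation
    print(f"${low_end:,} to ${high_end:,}")  # $6,000,000,000 to $30,000,000,000

Even at the statutory minimum, per-violation damages multiplied across millions of class members would dwarf the eventual $650 million settlement.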

 

The allegations arose from Facebook’s “Tag Suggestions” feature, a photo-labeling service that suggests the names of individuals who appear in photos. Facebook drew on information from “tagging,” in which users identify themselves and other Facebook users in photos. That information was stored in a database until Facebook had enough data to recognize the faces of its users automatically.

 

Facebook isn’t the only app on your phone gathering biometric data. Instagram, owned by Facebook, recently surpassed 1 billion monthly active users. Unlike Facebook, Twitter and other platforms, Instagram is built almost exclusively around sharing photos. Users may be unaware that sharing their latest selfie adds to Instagram’s facial recognition database. The relevant disclosure is buried deep within the Data Policy section of Instagram’s Terms of Service:

Face recognition: If you have it turned on, we use face recognition technology to recognize you in photos, videos and camera experiences. The face-recognition templates we create may constitute data with special protections under the laws of your country. Learn more about how we use face recognition technology, or control our use of this technology in Facebook Settings. If we introduce face-recognition technology to your Instagram experience, we will let you know first, and you will have control over whether we use this technology for you.

 

Several other popular smartphone apps have been criticized for the improper use of facial recognition technology. Most recently, Clearview AI has come under fire from privacy advocates. According to its website, Clearview AI is “a new research tool used by law enforcement agencies to identify perpetrators and victims of crimes.” Clearview AI “scrapes” publicly available photos from social media accounts. The company contracts with law enforcement agencies and, until May 2020, also sold this information to private companies. Clearview promised to voluntarily terminate all contracts with entities based in Illinois after it, too, was sued under BIPA.

 

How are Technology Companies Addressing Privacy Concerns?

In response to these recent controversies, several of the largest technology companies have announced restrictions on their development of facial recognition technology. On June 8, 2020, IBM announced in a letter to Congress that it would no longer develop or offer general-purpose facial recognition technology.

 

“IBM firmly opposes and will not condone uses of any [facial recognition] technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency,” IBM said in the letter. “We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”

 

On June 10, 2020, Amazon announced a one-year moratorium on police use of its facial recognition program. The Amazon technology (“Rekognition”) is part of its AWS platform and offers features including face search, verification, detection, and analysis. In a brief statement, Amazon announced:

“We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.”
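
For a sense of what a service like Rekognition exposes to developers, here is a minimal face-verification sketch using AWS’s boto3 SDK for Python (the bucket and file names are hypothetical, and this illustrates only one of the service’s features):

    # Minimal sketch: compare a probe photo against a reference photo with
    # Amazon Rekognition via boto3. Bucket and object names are hypothetical.
    import boto3

    client = boto3.client("rekognition", region_name="us-east-1")

    response = client.compare_faces(
        SourceImage={"S3Object": {"Bucket": "example-bucket", "Name": "reference.jpg"}},
        TargetImage={"S3Object": {"Bucket": "example-bucket", "Name": "probe.jpg"}},
        SimilarityThreshold=90,  # report only matches at or above 90% similarity
    )

    for match in response["FaceMatches"]:
        print(f"Similarity: {match['Similarity']:.1f}%")

The same SDK also exposes face detection, search against stored collections of faces, and analysis of facial attributes.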

 

On June 11, 2020, Microsoft announced it would no longer permit police use of its facial recognition technology.

 

Conclusion

Facial recognition may be the next frontier of privacy legislation. It remains to be seen whether the Facial Recognition and Biometric Technology Moratorium Act will gain momentum in Congress.

 

In the interim, there are several steps users can take to protect their privacy, depending on their level of concern. Users concerned about protecting biometric data should avoid apps that require uploading a photo and refrain from tagging themselves and others in social media posts. Finally, users can review their privacy settings and disable facial recognition features.

 

Unfortunately, reining in the use of facial recognition technology by law enforcement and other government agencies isn’t quite as easy. Other governments, most notably China, paint a disturbing picture of how biometric data can be used by authorities in the absence of appropriate legal safeguards.

 

For now, increasing public awareness about the legal issues associated with sharing biometric data is an important first step.

 


Patrick McKnight is an Associate in the Litigation Department of Klehr Harrison in Philadelphia, Pennsylvania. He is a member of the firm’s Data, Privacy, and Cybersecurity practice group.

About: PBA Cybersecurity and Data Privacy

The Pennsylvania Cybersecurity and Data Privacy Committee analyzes cybersecurity issues and educates PBA members about legal, regulatory and industry standards that preserve the confidentiality of protected information.

