Faces in the Database: Clearview AI’s Costly Lesson in Privacy Compliance 

Article by Tsaaro

7 min read

Clearview AI’s Legal Violations in the Netherlands   

The Netherlands’ Data Protection Authority (DPA) issued a fine of 30.5 million euros (approx. $33.7 million) to the facial recognition startup Clearview AI for building what the agency called an “illegal database” of billions of photos of faces. Clearview AI, Inc. is an American facial recognition company that primarily provides software to law enforcement and government agencies. Its algorithm matches faces against a database of more than 30 billion images collected from the internet, including from social media platforms. Founded by Hoan Ton-That and Richard Schwartz, the company was first reported to be in use by law enforcement in 2019.
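
At a technical level, matching a face against such a database is essentially a nearest-neighbour search over face embeddings. The sketch below is a minimal, purely illustrative Python example of that idea; the embedding size, similarity threshold, and URLs are hypothetical assumptions, and it does not represent Clearview’s actual implementation.

```python
# Minimal illustration of embedding-based face matching. This is NOT
# Clearview's system: the embedding size, threshold, and URLs are invented,
# and real embeddings would come from a face-embedding neural network.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def match_face(probe: np.ndarray, database: dict, threshold: float = 0.6):
    """Return (source_url, score) pairs above the threshold, best match first."""
    hits = [(url, cosine_similarity(probe, emb)) for url, emb in database.items()]
    return sorted((h for h in hits if h[1] >= threshold), key=lambda h: h[1], reverse=True)

# Toy database of random "embeddings" keyed by a hypothetical source URL.
rng = np.random.default_rng(0)
database = {f"https://example.com/photo_{i}": rng.normal(size=128) for i in range(1000)}

# A probe image close to one stored photo should surface that photo as the top hit.
probe = database["https://example.com/photo_42"] + rng.normal(scale=0.1, size=128)
print(match_face(probe, database)[:3])
```

The privacy concern follows directly from this design: once an embedding of a scraped photo sits in the database, any new image of the same person can be linked back to it.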

The DPA has warned Dutch companies that using Clearview’s services is banned. Clearview has so far not objected to the decision, which rules out any appeal against the fine. However, Clearview’s chief legal officer, Jack Mulcaire, said in a statement emailed to The Associated Press that the decision was “unlawful, devoid of any due process” and “unenforceable.” The Dutch data protection watchdog found that building the database and insufficiently informing the people whose images appear in it violated the European Union’s General Data Protection Regulation (GDPR). DPA chairman Aleid Wolfsen highlighted the dangers of facial recognition, calling it a highly intrusive technology, and explained that a single picture posted on the internet is enough for a person to end up in Clearview’s database and be tracked.

The DPA further warned that if Clearview did not halt its breaches of the regulation, further consequences would follow: noncompliance penalties of up to 5.1 million euros (approx. $5.6 million) on top of the existing fine. Clearview, for its part, maintains that it does not fall under EU data protection law, arguing that it has no place of business in the Netherlands or the EU, has no customers in the Netherlands or the EU, and does not undertake any activities that would otherwise bring it within the scope of the GDPR.

History of Clearview’s Privacy Violations 

This is not, however, Clearview’s first fine for violating data protection rules. The French data protection authority (CNIL) received complaints from individuals about Clearview AI’s facial recognition software and opened an investigation, and in May 2021 the association Privacy International also alerted the CNIL to the practice. The chair of the CNIL referred the matter to its restricted committee, which found Clearview to be in violation of the GDPR on account of unlawful processing of personal data, failure to respect individuals’ rights, and lack of cooperation with the CNIL. The committee imposed the maximum financial penalty of 20 million euros (approx. $21 million) in accordance with Article 83 of the GDPR. Similarly, Italy’s data protection agency fined the firm 20 million euros (approx. $21 million) for breaches of EU law, finding that the personal data held by the company, including biometric and geolocation information, had been processed unlawfully without an appropriate legal basis. And on 23 May 2022, the UK’s Information Commissioner’s Office announced that it had fined Clearview AI £7.5 million (approx. $9.4 million) for unlawfully collecting and storing images of individuals from social media platforms without their consent.

Dangers of Facial Recognition Technology 

Facial recognition technology (FRT) and artificial intelligence (AI) present several risks, particularly concerning privacy and surveillance: 

  1. Potential for fraud: Companies selling facial recognition software have compiled huge databases to power their algorithms; the controversial Clearview AI, for example, has amassed more than 30 billion images (scraped from sources such as Google, Facebook, YouTube, LinkedIn and Venmo) that it can search against. These databases are themselves a security risk: hackers have previously broken into databases of facial scans used by banks, police departments and defence firms. 
  2. Predatory marketing: Software that analyzes facial expressions could be used by some companies to prey on vulnerable customers, for example by detecting extreme emotions such as distress and tailoring products and services to those individuals. 
  3. Identity fraud: If facial data is compromised, it poses serious threats to governments and ordinary citizens alike. If the security measures around facial recognition technology are not stringent enough, hackers can spoof other people’s identities to carry out illegal activities, which may result in large financial losses if the data used for financial transactions is stolen or duplicated. 
  4. Breach of privacy: The threat technology poses to an individual’s right to privacy is perhaps the greatest risk created by the extensive use of facial recognition. Privacy has become such a critical issue that in some cities in states such as California and Massachusetts, law enforcement agencies are banned from using real-time facial recognition tools and must instead rely on recorded video from tools like body-worn cameras. 

Violations of provisions under GDPR 

Clearview has violated various provisions of the EU’s privacy law, the GDPR. Article 5 of the legislation sets out the principles relating to the processing of personal data; these principles are foundational to the legislation and to data protection generally. Specifically, the principles in Article 5(1)(a), (b) and (c) were held to have been violated. They require that personal data be: 

  1. processed lawfully, fairly and in a transparent manner in relation to the data subject (‘lawfulness, fairness and transparency’); 
  2. collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be considered to be incompatible with the initial purposes (‘purpose limitation’); 
  3. adequate, relevant and limited to what is necessary in relation to the purposes for which they are processed (‘data minimisation’); 

Article 6 of the legislation deals with the lawfulness of processing and sets out the legal bases under which personal data may be processed. These are: 

  1. Consent 
  2. Contractual necessity 
  3. Legal obligation 
  4. Vital interests 
  5. Public task 
  6. Legitimate interest 

The company’s collection and use of biometric data from publicly available images were held to violate the principles of transparency and fairness. Moreover, Clearview AI’s collection of vast amounts of data not specifically consented to or necessary for a particular purpose breached the data minimization principle. 

Article 9 of the legislation sets a higher threshold for processing certain special categories of personal data. In the case of Clearview AI, the processing of biometric data, which falls under special categories of data, was conducted without explicit consent from individuals. This processing of biometric data was not covered by any of the lawful bases under Article 9, making it a significant breach of GDPR’s requirements for handling sensitive data. 

Article 12 contains provisions to ensure transparent information, communication and modalities for the exercise of the rights of the data subject. It places an obligation on controllers to ensure compliance with Articles 13 to 20 and Article 34. Clearview violated Article 12 by failing to comply with the requirements laid down in those articles. 

Article 13 of the GDPR requires that, when personal data are collected directly from the data subject, the data controller must provide specific information to the data subject at the time of collection. Article 14 sets out what information must be provided where personal data have not been obtained from the data subject. This includes an obligation on the data controller to provide: 

  1. the identity and the contact details of the controller; 
  2. the contact details of the data protection officer, where applicable; 
  3. the purposes of the processing for which the personal data are intended, as well as the legal basis for the processing; 
  4. the categories of personal data concerned; 
  5. the recipients or categories of recipients of the personal data, if any; 
  6. information about transfers of personal data to third countries and the safeguards applied. 

This information is crucial for ensuring transparency and allowing individuals to understand and control how their data is used. Clearview AI’s practices did not align with the requirements of Articles 13 and 14: the company collected biometric data from publicly available images without adequately informing individuals about the collection, its purposes, its legal basis, or how long their data would be retained. This lack of transparency about who was collecting the data, for what purposes, and for how long it would be stored represents a significant breach of Articles 13 and 14. 
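
For illustration only, the hypothetical sketch below shows one way a controller might record the disclosures that Article 14 calls for as a simple data structure; the field names and example values are assumptions, not a legally prescribed format.

```python
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical record of the information Article 14 requires a controller to
# provide; field names and example values are illustrative only.
@dataclass
class Article14Notice:
    controller_identity: str                 # identity and contact details of the controller
    dpo_contact: Optional[str]               # data protection officer, where applicable
    purposes_and_legal_basis: list           # purposes of processing and their legal basis
    data_categories: list                    # categories of personal data concerned
    recipients: list = field(default_factory=list)  # recipients or categories of recipients
    third_country_transfers: Optional[str] = None   # transfers to third countries and safeguards

notice = Article14Notice(
    controller_identity="Example Corp, privacy@example.com",
    dpo_contact="dpo@example.com",
    purposes_and_legal_basis=["service analytics (legitimate interest, Art. 6(1)(f))"],
    data_categories=["contact details", "usage data"],
    recipients=["cloud hosting provider"],
    third_country_transfers="US transfer under standard contractual clauses",
)
print(notice)
```

A record of this kind is only the starting point: the substance of Articles 13 and 14 is that this information actually reaches the people whose data is processed, which is precisely what Clearview failed to do.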

Similarly, violations of Articles 15, 17, 27 and 31 were also upheld. These articles deal with the right of access by the data subject, the right to erasure (‘right to be forgotten’), representatives of controllers or processors not established in the Union, and cooperation with the supervisory authority, respectively. 

Clearview AI was held to have violated GDPR by failing to grant individuals access to their data as required by Article 15 and not complying with deletion requests under Article 17. The company also neglected to appoint an EU representative as mandated by Article 27, complicating enforcement of GDPR compliance. Additionally, Clearview’s lack of cooperation with supervisory authorities, as required by Article 31, impeded regulatory oversight and enforcement, leading to further breaches of GDPR obligations. 

Conclusion  

Clearview AI’s extensive use of facial recognition technology has led to significant legal and regulatory repercussions, underscoring the critical need for adherence to data protection laws. The Dutch Data Protection Authority’s substantial fine of 30.5 million euros highlights severe breaches of the GDPR. Clearview’s practices, characterized by a lack of transparency, failure to obtain explicit consent, and improper handling of biometric data, have been deemed unlawful. Similar penalties imposed by France, Italy, and the UK point to a broader pattern of non-compliance and the high stakes of privacy violations. 

The situation with Clearview AI serves as a stark reminder of the importance of stringent privacy regulations and the need for companies to align their practices with legal standards to protect individual rights. Comparable frameworks, such as India’s Digital Personal Data Protection Act (DPDPA), show that these issues are globally relevant and that robust data protection measures are essential across jurisdictions. As technology evolves, so too must our commitment to safeguarding personal information and ensuring accountability for breaches. 

