Clearview AI: The database for photos from social media

Clearview AI’s facial recognition app operates in a legal grey area in many countries.

Clearview AI uses facial recognition software to build a database of billions of publicly available photos of human faces. The photos for this index are sourced from social media and other online sources. Starting from a single facial photo, the Clearview AI software can find other publicly available images of the same person and link them to personal data such as social media profiles, name, place of residence and occupation.

As of 2024, Clearview AI works with many law enforcement agencies, particularly in the United States. The USA is also Clearview AI’s main market, according to CEO Hoan Ton-That.

“While there has been tremendous demand for our service from around the world, Clearview AI is primarily focused on providing our service to law enforcement and government agencies in the United States,” he told BuzzFeed News in 2021.

In 2019, the Indiana State Police became the first official customer, followed by the Department of Homeland Security.

According to an investigation by BuzzFeed News, authorities in various EU countries have also at least tried out the software.

What is Clearview AI?

Clearview AI acts like a biometric search engine. The company uses its software to index around 10 billion images (the target is 100 billion). These indexed images come from the internet and social media channels such as Facebook, Instagram, LinkedIn and X, and are analysed using the company’s own AI tool. This identification aid enables Clearview AI to support criminal prosecutions in the USA, for example.
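
How such a biometric search engine could work in principle can be sketched in a few lines of Python. The sketch below uses the open-source face_recognition library and assumes local image files and a simple list of source URLs; Clearview AI’s actual pipeline is proprietary, so every name and parameter here is purely illustrative.

```python
# A minimal sketch of a biometric index of this kind, assuming the open-source
# face_recognition library and local image files. Clearview AI's real pipeline
# is proprietary, so all names, paths and data structures here are illustrative.
import face_recognition
import numpy as np

def build_index(photo_records):
    """photo_records: iterable of (image_path, source_url) pairs collected from public pages."""
    encodings, sources = [], []
    for image_path, source_url in photo_records:
        image = face_recognition.load_image_file(image_path)
        # One 128-dimensional embedding per face detected in the image
        for encoding in face_recognition.face_encodings(image):
            encodings.append(encoding)
            sources.append(source_url)
    return np.array(encodings), sources
```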

Clearview AI in the legal grey zone

The American company is highly controversial, as it operates in a legal grey area. In Europe, several countries have taken Clearview AI to court and have been partially vindicated. Nevertheless, the proceedings did not always result in the use of the photos for the database being prohibited.

For example, the UK took Clearview AI to court and the company was ultimately fined £7.5 million in June 2023. Clearview AI appealed, the appeal was upheld in October of the same year, and the £7.5 million fine was overturned. Online media such as Politico.eu wrote: “The UK is powerless.”

The Austrian data protection authority also decided that it was illegal for Clearview AI to process Austrian data. Nevertheless, no fine was imposed and no general ban was issued against the American company in Austria. However, the Austrian data protection authority did mention that it could issue the ban at a later date.

There were further complaints to data protection authorities in France, Italy and Greece. Greece fined the American company 20 million euros (July 2022), as did Italy a few months earlier (May 2022), and France imposed a further 20-million-euro fine on Clearview AI (May 2023).

In addition, police and government-related organisations in 24 countries outside the USA have used the American facial recognition tool, including universities, police authorities and public prosecutors. According to research by BuzzFeed, people in management positions did not know that employees were using the tool.

As previously described, the American company Clearview AI operates in a legal grey area with its services. Here we discuss why the indexing of so many “private”, or rather “personal”, photos is not categorised as illegal:

  • Use of publicly available information
    Clearview AI collects images from publicly available internet sources. The company claims that it is operating within the bounds of legality by using information that is already publicly available, although this practice has raised privacy concerns.
  • Lack of comprehensive regulations regarding facial recognition
    When Clearview AI became known, many countries did not have specific laws governing the use of facial recognition technology. It was difficult to definitively categorise Clearview AI as illegal because there was no clear legal framework for this technology.
  • Partnerships with law enforcement agencies
    Clearview AI entered into partnerships with law enforcement agencies that local authorities have in some cases deemed legal. Law enforcement agencies claim that facial recognition technology can be a useful tool to identify and locate individuals in criminal investigations.
  • Ongoing legal challenges
    While Clearview AI has faced legal challenges, lawsuits and regulatory scrutiny, the legal proceedings are ongoing. Until a court or regulatory agency makes a specific ruling or a law is enacted, Clearview AI’s legal status remains in flux.
  • First Amendment arguments
    Clearview AI has asserted that the use of public images for facial recognition falls under First Amendment rights and that the collection and analysis of publicly available information are protected activities.

Facial recognition in the Russia-Ukraine war

Ukraine has been using the Clearview AI app to identify Russian casualties since mid-March 2022. The rollout happened very quickly: Ukraine only became aware of the tool at the beginning of that month, the CEO offered to make Clearview AI available to Ukraine free of charge, and Ukrainian politicians gratefully accepted; by mid-March 2022 the app was in use.

A smartphone is used to photograph the face of the deceased, and the photo is then immediately fed into the database. Within a short space of time, the artificial intelligence matches the facial features against the existing photos in the database and identifies the deceased soldier. The Russian soldier’s family members are then contacted and informed of the death. The number of people identified in this way is allegedly very high; Mykhailo Fedorov, Ukraine’s Deputy Prime Minister, shared this information with Reuters in March 2022. Some experts describe the use of such apps as electronic warfare, as soldiers actively participate in it with the help of AI. You can read more about electronic warfare in our blog “Electronic warfare”.
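
As a rough illustration of the matching step described above, the following sketch compares a single query photo against the index from the earlier example. The function names and the 0.6 distance threshold follow the open-source face_recognition library’s defaults; they are assumptions for illustration, not details of Clearview AI’s real system.

```python
# Hypothetical identification step: a single query photo is embedded and compared
# against the index built in the earlier sketch. The 0.6 tolerance is the
# face_recognition library's default threshold, not a Clearview AI parameter.
import face_recognition
import numpy as np

def identify(query_path, encodings, sources, tolerance=0.6):
    query_image = face_recognition.load_image_file(query_path)
    query_encodings = face_recognition.face_encodings(query_image)
    if not query_encodings:
        return []  # no face detected in the query photo
    distances = face_recognition.face_distance(encodings, query_encodings[0])
    # Return the source URLs of all indexed faces within the threshold, closest first
    order = np.argsort(distances)
    return [(sources[i], float(distances[i])) for i in order if distances[i] <= tolerance]
```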

Cybersecurity in Ukraine

The Ukrainian Cyber Alliance (UCA) and other IT experts conducted a two-month investigation at the end of 2017 to assess how well Ukrainian public resources are protected and whether officials handle IT security responsibly. Many vulnerabilities were discovered in the government’s information systems. The activists openly reported these vulnerabilities to those who had put the data at risk, for example through weak passwords or missing updates, and noted that publicly shaming the authorities was the most effective way to generate attention. For example, a computer of the Main Directorate of the National Police in the Kiev region could be accessed without a password, and a network drive with 150 GB of information was found, containing passwords, plans, protocols and personal data of police officers.

In a public document by the European Media Platform, “What’s wrong with Diia”, on which members of the UCA also worked, the state app Diia is sharply criticised. The Diia app was launched in 2020 and enables Ukrainian citizens to use digital documents on their smartphones instead of physical documents for identification and document exchange. Among the security gaps criticised are the fact that the access PIN can be disabled and that some older people use 1111 as their PIN because they are not used to more complex PINs.
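
As a purely illustrative aside, a basic PIN policy of the kind such criticism implies might reject trivially guessable codes. The sketch below is our own assumption and has no relation to Diia’s actual implementation.

```python
# Purely illustrative: a basic policy check that rejects trivially guessable PINs
# such as "1111" or "1234". This is our own sketch and has no relation to Diia's code.
def is_weak_pin(pin: str) -> bool:
    digits = [int(d) for d in pin]
    if len(set(digits)) == 1:                    # repeated digit, e.g. "1111"
        return True
    steps = {b - a for a, b in zip(digits, digits[1:])}
    if steps in ({1}, {-1}):                     # ascending/descending run, e.g. "1234"
        return True
    return False

assert is_weak_pin("1111") and is_weak_pin("1234") and not is_weak_pin("2046")
```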

Wrap up

Clearview AI brings many advantages for law enforcement and for identifying individuals when the human eye fails. However, it is very easy for such databases to fall into the wrong hands and for the data to be used for malicious purposes. Data protection must therefore remain centre stage, and it is up to the respective countries to follow suit with regulations and laws for facial recognition software.