Clearview AI Scrapes 30 Billion Images Illicitly, Giving Them to Cops

Clearview’s CEO recently acknowledged that the company’s notorious facial recognition database, used by law enforcement agencies across the nation, was apparently built in part from 30 billion photos the company illicitly scraped from Facebook and other social media platforms without users’ consent. Critics have dubbed the practice a “perpetual police line-up,” one that sweeps in even individuals who have done nothing wrong.

The company often boasts of its technology’s potential for identifying rioters involved in the January 6 attack on the Capitol, rescuing children from abuse or exploitation, and helping exonerate people falsely accused of crimes. Yet critics point to cases in Detroit and New Orleans where incorrect face recognition matches led to wrongful arrests.

Last month, Clearview CEO Hoan Ton-That admitted in an interview with the BBC that the company had used photos without users’ knowledge. That practice is what made possible the organization’s enormous database, which its website promotes to law enforcement as a tool “to bring justice to victims.”

What Happens When Unauthorized Data Is Scraped

Privacy advocates and digital platforms have long criticized the technology as intrusive, with social media giants like Facebook sending cease-and-desist letters to Clearview in 2020 accusing the company of violating their users’ privacy.

“Clearview AI’s actions invade people’s privacy which is why we banned their

[…]

This article has been indexed from CySecurity News – Latest Information Security and Hacking Incidents