Facial recognition startup Clearview AI is best known for two things: its facial recognition algorithm, which lets you upload a photo and compare it against a database of potential matches, and the fact that the company built that database by scraping over 3 billion images from user profiles on Microsoft’s LinkedIn, Twitter, Venmo, Google’s YouTube, and other websites. Since The New York Times profiled Clearview AI in January, the company has been in the news a handful of times. None of the coverage has been positive.
In early February, Facebook, LinkedIn, Venmo, and YouTube sent cease-and-desist letters to Clearview AI over the aforementioned photo scraping. Exactly three weeks later, Clearview AI informed its customers that an intruder accessed its client list and the number of searches each client conducted. The statements the company made at the time of each incident perfectly illustrate its irresponsibility.
“Google can pull in information from all different websites,” Clearview AI CEO Hoan Ton-That told CBS News. “So if it’s public, and it’s out there, and it could be inside Google’s search engine, it can be inside ours as well.”
Ton-That is right in saying that Google is a search engine that indexes websites. He is wrong in saying any public information is up for the taking. The difference between Google and Clearview AI is simple: Google knows most websites want to be indexed because webmasters explicitly provide instructions for search engine crawlers. Those that don’t want to be indexed can opt out.
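The opt-out mechanism that makes Ton-That’s comparison fall apart is the Robots Exclusion Protocol: a plain-text robots.txt file at a site’s root tells compliant crawlers what they may and may not index. A minimal sketch of a site opting out entirely:

```
# robots.txt at the site root: ask all compliant crawlers to index nothing
User-agent: *
Disallow: /
```

Individual pages can do the same with a `<meta name="robots" content="noindex">` tag. The catch, and the whole point: compliance is voluntary. Google honors these signals; a scraper can simply ignore them.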
I don’t know of anyone who is voluntarily providing their pictures to Clearview AI, or publishing instructions on how to obtain them. If most people were sending Clearview AI their pictures, the company wouldn’t have to scrape billions of them.
“Security is Clearview’s top priority,” Tor Ekeland, an attorney for Clearview AI, told The Daily Beast. “Unfortunately, data breaches are part of life in the 21st century. Our servers were never accessed. We patched the flaw, and continue to work to strengthen our security.”
Ekeland is right in saying that data breaches are a part of life in the 21st century. He is wrong in saying that Clearview AI’s top priority is security. If that were the case, the company wouldn’t store its client list and their searches on a computer connected to the internet. It also wouldn’t have a business model that hung on pilfering people’s photos.
Maybe it’s not surprising that a company that is proud of taking data without consent argues that a data breach is business as usual.
‘Strictly for law enforcement’
Let’s look at an even tighter time frame. Clearview AI has repeatedly said that its clients include over 600 law enforcement agencies. The company never claimed those agencies were its only clients — until, on February 19, the CEO implied just that.
“It’s strictly for law enforcement,” Ton-That told Fox Business. “We welcome the debate around privacy and facial recognition. We’ve been engaging with government a lot and attorney generals. We want to make sure this tool is used responsibly and for the right purposes.”
On February 27, BuzzFeed reported that the 2,228 organizations whose people had created Clearview AI accounts included not just law enforcement agencies but private companies across industries: major retailers (Kohl’s, Walmart), banks (Wells Fargo, Bank of America), entertainment (Madison Square Garden, Eventbrite), gaming (Las Vegas Sands, Pechanga Resort Casino), sports (the NBA), fitness (Equinox), and cryptocurrency (Coinbase). Collectively, those accounts performed nearly 500,000 searches. Many organizations had no idea their employees were using Clearview AI.
It took just eight days for one of Clearview AI’s core arguments — that its tool was only for helping law enforcement officials do their job — to fall apart.
Thievery, shoddy security, and lies are not the real problem here. They’re side stories to the bigger concern: Clearview AI is letting anyone use facial recognition technology. There are calls for the government to stop using the tech itself, to regulate the tech, and to institute a moratorium. Clearview AI will likely go through a handful more news cycles before the U.S. government does anything that might impact the NYC-based company.
There’s also no guarantee that there will be consequences for Clearview AI. While the startup is feeling pressure to do something (it is apparently working on a tool that would let people request to opt out of its database), that won’t be enough. We’re much more likely to see Clearview AI’s clients act first. In light of the latest developments, law enforcement agencies, companies that were not aware their employees were using the tool, and everyone in between will likely reconsider using Clearview AI.
We already know that facial recognition technology in its current form is dangerous. Clearview AI specifically plays fast and loose not just with the data its business is built upon, but also with the data its business generates. We can’t predict Clearview AI’s future, but if the last two months have been any indication, the company’s public statements are going to keep coming up short. If history in tech tells us anything, that quickly growing snowball is going to stop very abruptly.
Update at 2:00 p.m. Pacific: Hours after this story was published, Apple disabled Clearview AI for iOS. Clearview AI had been violating Apple’s app distribution rules. Shocking.
ProBeat is a column in which Emil rants about whatever crosses him that week.