On Tuesday, Oakland became the third U.S. city, after San Francisco and the Boston suburb of Somerville, to ban the use of facial recognition by local government departments, including its police force. The ordinance adopted by the city council, which was written by Oakland’s Privacy Advisory Commission and sponsored by Councilmember Rebecca Kaplan, prohibits the city and its staff from obtaining, retaining, requesting, accessing, or using facial recognition technology or any information gleaned from it.
Oakland’s ban was long expected — the Privacy Advisory Commission endorsed the legislation’s wording late last month — but it comes as a growing chorus of AI experts, privacy advocates, and lawmakers express concerns over how the largely unregulated technology is being applied.
A September 2018 report revealed that IBM worked with the New York City Police Department to develop a system that allowed officials to search for people by skin color, hair color, gender, age, and various facial features. Elsewhere, the FBI and U.S. Immigration and Customs Enforcement are reportedly using facial recognition software to sift through millions of driver’s license photos, often without a court order or search warrant. And this past summer, Amazon seeded Rekognition, a cloud-based image analysis technology, to law enforcement in Orlando, Florida, and to the Washington County, Oregon Sheriff’s Office. The City of Orlando said this week that it discontinued its Rekognition pilot, citing a lack of necessary equipment and bandwidth. But Washington County used Rekognition to build an app that lets deputies run scanned photos of suspected criminals through a database of 300,000 faces, which the Washington Post claims has “supercharged” police efforts in the state.
Nonprofit advocacy group Fight for the Future yesterday published a map highlighting the speed with which facial recognition is spreading. The map lists dozens of U.S. airports, state and local law enforcement agencies, states, and cities where such systems are in active use.
Discouragingly, as experts have repeatedly noted, there appears to be little correlation between the accuracy of facial recognition systems and the pace of their deployment. It was recently revealed that a system used by London’s Metropolitan Police produces as many as 49 false matches for every hit. During a House oversight committee hearing on facial recognition technologies in 2017, the U.S. Federal Bureau of Investigation admitted that the algorithms it uses to identify criminal suspects are wrong about 15% of the time. And MIT Media Lab researcher and Algorithmic Justice League founder Joy Buolamwini found in audits of facial recognition systems — including those made by Amazon, IBM, Face++, and Microsoft — that they performed poorly on young people, women, and people with dark skin.
The evidence has led analysts like Clare Garvie, a senior associate at the Georgetown University Center on Privacy and Technology and coauthor of the Perpetual Lineup report, which monitors trends in computer-assisted facial recognition, to conclude that facial recognition technology could cause extraordinary harm. Last month, the center released reports detailing the NYPD’s use of altered images and pictures of celebrities who look like suspects to make arrests, as well as real-time systems being used in Detroit and Chicago and tested in other major U.S. cities.
“Imagine if we had a fingerprint lab drawing fingerprints or drawing a latent print’s finger ridges with a pen and submitting that to search,” she told the House Oversight and Reform Committee in May. “That would [be] a scandal, that would be a reason for a mistrial or convictions being overturned, and it’s hugely problematic.”
Perhaps unsurprisingly, beyond the adoption of outright bans, lawmakers at the national, state, and local levels have pushed back against unfettered facial recognition software. Hearings before the U.S. House Oversight and Reform Committee in May saw bipartisan support for limits on law enforcement’s use of such systems, and state legislatures in Massachusetts and Washington have considered imposing moratoriums on face surveillance platforms. Separately, the California State Legislature is currently weighing a ban on facial recognition in police body cam footage, as is the Berkeley City Council.
But calls for regulation and cooperation with watchdog groups haven’t exactly been universally heeded. When Georgetown researchers first requested facial recognition records from the NYPD in 2016, they were told that no such records existed — despite the fact that the technology had been in use since 2011. Only after two years in court did the agency turn over 3,700 pages of documents related to its use of facial recognition software.
Garvie and others say that in future legislation, they’d like to see mandatory bias and accuracy testing, court oversight, minimum photo quality standards, and public audits (like the annual surveillance tech use reports already required in San Francisco). Critics also advocate against real-time facial recognition use by police and the practice of scanning driver’s license databases with facial recognition software. And many believe that prosecutors and police should be obligated to tell suspects and their counsel if facial recognition aided in an arrest.
It’s a lengthy checklist, but Garvie believes it’s the baseline required to deploy facial recognition technology responsibly.
“What we’re seeing today is that in the absence of regulation, [facial recognition] continues to be used, and now we have more information about just how risky it is, and just how advanced existing deployments are,” she said in a previous statement. “In light of this information, we think that there needs to be a moratorium until communities have a chance to weigh in on how they want to be policed and until there are very, very strict rules in place that guide how this technology is used.”
Thanks for reading,
AI Staff Writer
P.S. Please enjoy this segment from CBS about how facial recognition technology is outpacing the law.