A confluence of nationwide events has put the long-running fight over facial recognition and policing into sharper focus. The movement to defund law enforcement is gaining momentum in the wake of the police killing of George Floyd and the subsequent police brutality against protesters. So powerful is the movement that three major tech companies — IBM, Amazon, and Microsoft — have, to varying degrees, promised not to sell their facial recognition technology to law enforcement. New York City just passed the POST Act, which requires the NYPD to disclose the surveillance technologies it’s using. At the federal level, congressional representatives including Jimmy Gomez (D-CA) and Rashida Tlaib (D-MI) are bringing legislative pressure to bear on facial recognition. Just yesterday, the Boston City Council banned facial recognition, joining a few other locales in Massachusetts and California that have done the same. And today, members of Congress introduced a bill that would ban federal funding for facial recognition or other biometric surveillance.
Though this battle is raging nationwide, look no further than the city of Detroit to find a clear illustration of all the real-world challenges, implications, failures, and victories around facial recognition and policing.
A local fight of national importance
Detroit is home to Project Greenlight, a controversial police surveillance program in which cameras all over the city keep an eye on the populace. Project Greenlight allows police to use facial recognition software on captured still images and video footage to match suspects’ faces with a database. Systems like these have been shown to perform worse on women and people with darker skin, which is all the more egregious in Detroit, the “Blackest city in America,” with a population that is close to 80% Black.
Recently, under acute pressure from local activists including Detroit Will Breathe and the Detroit Community Technology Project, the Detroit City Council tabled a decision on renewing some of its police surveillance-related software and data contracts.
There were actually three such items on the agenda for the Detroit City Council’s June 16 meeting. One pertained to extending the contract for DataWorks Plus, which is the Detroit PD’s primary facial recognition software provider. The agenda item was deleted, but investigative reporter Allie Gross of the Detroit Free Press acquired a screenshot of it. The contract was initially for three years and upwards of $1.2 million, and the proposed extension would add $219,934.50. The original contract expires on July 24.
The other two items had to do with funding for CLEAR Investigative Services, a Thomson Reuters database service that the Detroit police use in investigations. Together, the two items totaled $421,543.64. The two CLEAR items remained on the agenda, but in the end, the Council never voted on them in its June 16 meeting. None of the three items made it onto the agenda for the City Council’s June 23 meeting.
Absent a contract renewal, the fate of the Detroit Police Department’s facial recognition efforts could be in limbo. One of the local activist groups that could ostensibly claim some responsibility for the city council’s hesitation, Detroit Will Breathe, has a larger mandate to defund the police. “Defunding the police” means different things to different people, but broadly, it’s about diverting resources away from police departments and instead investing in building up communities and providing more social services. The activists seem to be getting their message across; even Michigan Governor Gretchen Whitmer, who is White, spoke favorably on the record about defunding the police in the sense of reallocating resources.
As expected, an injustice
Police abuse of facial recognition is still theoretical in many places, but not in Detroit. Perfectly illustrating the problems inherent in the system, yesterday the New York Times’ Kashmir Hill delivered an in-depth report about the false arrest of Robert Julian-Borchak Williams. After a robbery at a jewelry store, Michigan State Police uploaded a still image from the store’s security camera and used facial recognition software to match it with Williams’ driver’s license photo. Like millions of other records, the image is stored in the state’s Statewide Network of Agency Photos (SNAP) image database. The Detroit police arrested Williams, who is Black, based on the match.
The American Civil Liberties Union (ACLU) of Michigan took up Williams’ case and sent a letter of complaint to Detroit’s Office of the Chief Investigator. The Wayne County prosecutor dropped the charges. In response to Hill’s article, the prosecutor’s office issued a statement explaining why it dropped the case; in sum, it was due to shoddy police work that included violating the protocols on facial recognition use that the Detroit PD and the prosecutor’s office previously agreed upon.
The statement includes a note from county prosecutor Kym Worthy: “In the summer of 2019, the Detroit Police Department asked me personally to adopt their Facial Recognition Policy. I declined and cited studies regarding the unreliability of the software, especially as it relates to people of color. They are well aware of my stance and my position remains the same. Any case presented to my office that has utilized this technology must be presented to a supervisor and must have corroborative evidence outside of this technology. This present case occurred prior to this policy. Nevertheless, this case should not have been issued based on the DPD investigation, and for that we apologize.”
Rank One, the company whose facial recognition software was used in Williams’ case, pledged in an email to Reuters that it “will add a legal means to revoke any use of our software that violates our Code of Ethics and conduct a technical review of additional safeguards we can incorporate into our software to prevent any potential for misuse.”
In an article he penned for the Washington Post, Williams wrote, “I never thought I’d have to explain to my daughters why Daddy got arrested. How does one explain to two little girls that a computer got it wrong, but the police listened to it anyway?”
Williams’ bogus arrest is a clear example of what those who are opposed to police use of facial recognition have been worried about. The technology did not work properly, and the police failed to follow their own protocols that are designed to mitigate such a technological failure. That the county prosecutor saved Williams by tossing the case is not proof that the system works. This is still a matter of police using facial recognition to abuse their power over a Black person. And it doesn’t matter whether or not the police intended malice; Williams is an innocent man who was falsely accused of a crime, arrested in front of his family, and questioned in police custody. He spent a night in jail and later had to defend himself in court.
The Williams case is infuriating and exasperating, but it also neatly illustrates and validates activists’ and politicians’ protestations about the dangers of facial recognition in policing. The Detroit City Council’s de facto moratorium on re-funding its police department’s facial recognition program may presage real progress in the effort to end the practice altogether. It appears that national and local fervor and political pressure are effecting change in facial recognition technology use; the way it all continues to play out in Detroit could be a microcosm of what happens across the nation.