This week in AI, a new Pew Research Center survey shed light on Americans’ views of AI, including the use of facial recognition by police. In other news, the U.S. Justice Department revealed it hasn’t kept “specific record[s]” on its purchases of predictive policing technologies, a category of technologies that investigations have shown to be biased against minority groups.
Lured by the promise of reducing crime and the time to solve cases, law enforcement agencies have increasingly explored AI-powered tools like facial recognition, drones, and predictive policing software, which attempts to predict where crime will occur using historical data. According to Markets and Markets, police departments are expected to spend as much as $18.1 billion on software tools including AI-powered systems, up from $11.6 billion in 2019.
But the effectiveness of these systems has repeatedly been called into question. For example, an investigation by the Associated Press found that ShotSpotter, a “gunfire locater service” that uses AI to triangulate the source of firearm discharges, can miss live gunfire right under its microphones or misclassify the sounds of fireworks or cars backfiring. Extensive reporting by Gizmodo and The Markup, meanwhile, has revealed that Geolitica (previously called PredPol), a policing software that attempts to anticipate property crimes, disproportionately predicts that crime will be committed in neighborhoods inhabited by working-class people, people of color, and Black people in particular.
Facial recognition, too, has been shown to be biased against “suspects” with certain skin tones and ethnicities. At least three people in the U.S. — all Black men — have been wrongfully arrested based on poor facial recognition matches. And studies including the landmark Gender Shades project have shown that facial recognition technologies once marketed to police, including Amazon’s Rekognition, are significantly more likely to misclassify the faces of darker-skinned people.
Paradoxically, though, public support for facial recognition use by police is relatively high, with a plurality of respondents to a recent Pew report saying they agree with its deployment. The reason might be the relentless PR campaigns waged by vendors like Amazon, which have argued that facial recognition can be a valuable tool in helping to find missing persons, for instance. Or it might be ignorance of the technology’s shortcomings. According to Pew, respondents who’ve heard a lot about the use of facial recognition by the police were more likely to say it’s a bad idea for society than those who hadn’t heard anything about it.
Racial divisions cropped up in the Pew survey’s results, with Black and Hispanic adults more likely than white adults to say that police would definitely or probably use facial recognition to monitor Black and Hispanic neighborhoods more often than other neighborhoods. That is hardly surprising: Black and Hispanic individuals have a higher chance of being arrested and incarcerated for minor crimes and, consequently, are overrepresented in the mugshot data that has historically been used to develop facial recognition algorithms.
“Notable portions of people’s lives are now being tracked and monitored by police, government agencies, corporations and advertisers … Facial recognition technology adds an extra dimension to this issue because surveillance cameras of all kinds can be used to pick up details about what people do in public places and sometimes in stores,” the coauthors of the Pew study write.
Justice Department predictive policing
The Department of Justice (DOJ) is a growing investor in AI, having awarded a contract to Veritone for transcription services for its attorneys. The department is also a customer of Clearview, a controversial facial recognition vendor; employees across the FBI, Drug Enforcement Administration, and other DOJ agencies have used Clearview’s tool to perform thousands of searches for suspects.
But according to Gizmodo, the DOJ maintains poor records of its spending — especially where it concerns predictive policing tools. Speaking with the publication, a senior official said that the Justice Department isn’t actively tracking whether funds from the Edward Byrne Memorial Justice Assistance Grant Program (JAG), a leading source of criminal justice funding, are being used to buy predictive policing services.
That’s alarming, say Democratic senators including Ron Wyden (D-OR), who in April 2021 sent a letter to U.S. Attorney General Merrick Garland requesting basic information about the DOJ’s funding of AI-driven software. Wyden and his colleagues expressed concern that this software lacked meaningful oversight, potentially amplified racial biases in policing, and might even violate citizens’ rights to due process under the law.
The fears aren’t unfounded. Gizmodo notes that audits of predictive tools have found “no evidence they are effective at preventing crime” and that they’re often used “without transparency or … opportunities for public input.”
In 2019, the Los Angeles Police Department, which had been trialing a range of AI policing tools, acknowledged in an internal evaluation that the tools “often strayed from their stated goals.” That same year, researchers affiliated with New York University showed in a study that nine police agencies had fed software data generated “during periods when the department was found to have engaged in various forms of unlawful and biased police practices.”
“It is unfortunate the Justice Department chose not to answer the majority of my questions about federal funding for predictive policing programs,” Wyden said, suggesting to Gizmodo that it may be time for Congress to weigh a ban on the technology. Already, a number of cities, including Santa Cruz, California, and New Orleans, Louisiana, have banned the use of predictive policing programs. But partisan gridlock and special interests have so far stymied efforts at the federal level.
For AI coverage, send news tips to Kyle Wiggers — and be sure to subscribe to the AI Weekly newsletter and bookmark our AI channel, The Machine.
Thanks for reading,

Kyle Wiggers

Senior AI Staff Writer
VentureBeat's mission is to be a digital town square for technical decision-makers to gain knowledge about transformative enterprise technology and transact. Discover our Briefings.