The Coalition for Critical Technology (CCT) penned an open letter opposing the publication of a paper titled “A Deep Neural Network Model to Predict Criminality Using Image Processing.” At the time of publication, the letter had more than 1,000 signatures from researchers, practitioners, academics, and others. According to a press release from Harrisburg University, the paper is slated for publication in a book series from Springer Publishing, and the letter urges readers to demand that Springer pull the paper and condemn the use of criminal justice statistics to predict criminality.
The use of algorithms in predictive policing is a fraught subject. As the CCT letter elaborates, criminal justice data is notoriously flawed. “Countless studies have shown that people of color are treated more harshly than similarly situated white people at every stage of the legal system, which results in serious distortions in the data,” the letter reads. One of the primary researchers on the Harrisburg University paper, Jonathan Korn, is a former NYPD officer.
Harrisburg University’s press release promises: “With 80% accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face. The software is intended to help law enforcement prevent crime.”
The notion of determining criminality from a person’s face is fundamentally racist and has deep roots in historical pseudoscience; facial recognition technology is just a modern means of attempting to answer the same flawed question. The CCT’s letter begins by condemning the very premise of the researchers’ question: “Such claims are based on unsound scientific premises, research, and methods, which numerous studies spanning our respective disciplines have debunked over the years.”
This is not the first time the paper in question has appeared. In early May, Harrisburg University pushed out the same press release, only to take it down after harsh criticism. Motherboard kept an archived version of the release and reported that one of the researchers, Nathaniel J.S. Ashby, said in an email, “The post/tweet was taken down until we have time to draft a release with details about the research which will address the concerns raised.”
But it’s unclear what has changed since May. The “new” press release is identical to the first one, and Ashby did not respond to VentureBeat’s questions about what, if anything, has changed in the paper. VentureBeat asked Springer whether it would accede to the CCT’s demands, but the publisher had not responded at the time of publication. (Updates: On June 23, although Springer had not yet responded to VentureBeat directly, the company confirmed on Twitter that it will not publish the article. The company did not expound on its plans for any of the group’s other demands. On June 25, Springer did respond to VentureBeat, saying the paper had previously been rejected for publication.)
Here is Springer’s June 25 response to VentureBeat: “We acknowledge the concern regarding this paper and would like to clarify at no time was this accepted for publication. It was submitted to a forthcoming conference for which Springer will publish the proceedings of in the book series Transactions on Computational Science and Computational Intelligence and went through a thorough peer review process. The series editor’s decision to reject the final paper was made on Tuesday 16th June and was officially communicated to the authors on Monday 22nd June. The details of the review process and conclusions drawn remain confidential between the editor, peer reviewers and authors.”
An overarching issue is technology’s inability to measure concepts, like criminality, that are fundamentally socially constructed and defined. “Machine learning does not have a built-in mechanism for investigating or discussing the social and political merits of its outputs,” the letter asserts. This echoes Dr. Ruha Benjamin’s statement from a talk earlier this year, in which she explained that “computational depth without historic or sociological depth is superficial learning.” Researcher Abeba Birhane further unpacks this notion in her award-winning paper “Algorithmic Injustices: Towards a Relational Ethics.”
The CCT’s letter is a rich resource for prior research on the topics of criminality, predictive policing, facial recognition, and related issues.