Predictive policing sounds like something out of a new sci-fi movie written to highlight the dangers of our algorithmic, data-driven modern world, but it’s not. Nor is it a possible tool currently relegated to research and development, or a distant goal for the future. Instead, predictive policing is just what it sounds like: an algorithm designed to predict when people are likely to commit crimes, sometimes before they commit them, and the New Orleans Police Department has secretly used the technology for years.

As you can imagine, the plan did not go smoothly, but we can learn some important lessons about technology adoption and public policy from the incident.

The full scoop

Late last month, The Verge covered the story: The New Orleans Police Department (NOPD) had begun using a tool from Silicon Valley data-mining giant Palantir to comb through social media and criminal history data to predict the likelihood that a given individual would commit (or be the victim of) a violent act. Working in secret, NOPD used the tool as far back as 2012 to identify members of gangs like 3NG and the 39ers, including one 22-year-old man named Evans “Easy” Lewis. During Lewis’ trial, NOPD produced more than 60,000 pages of documents generated by the Palantir partnership — yet made no mention of the source (or the partnership in general).

It’s no longer a secret. We now have public access to the original agreement, dating back to February 2012, which the NOPD has renewed three times. The agreement was set to expire at the end of February 2018 unless renewed, but in the wake of the controversy, New Orleans Mayor Mitch Landrieu’s office has decided not to renew the partnership again.

Crime forecasting isn’t exactly new, and Palantir has filed patents for large-scale crime-forecasting systems in the past. The company has even sold these systems to foreign intelligence services hoping to predict and prevent acts of terrorism. However, its main line of business has been helping companies better understand and market to their customers.

Now, defense attorneys for Kentrell Hickerson, who may have been convicted using evidence gathered through the Palantir-backed predictive policing partnership, have filed a motion arguing that the evidence was obtained in violation of Hickerson’s rights. If the motion succeeds, it could undermine any evidence gathered through the partnership (even in other cases).

What we can learn

So what can we learn from this situation?

  1. Predictive policing doesn’t work well on a local level. AI programs like this may have a place in predicting terrorism and other national-scale threats, but they have a poor track record at the city level. NOPD may have been able to gather extra evidence to prosecute existing gang members, but research from the RAND Corporation found that existing predictive policing technology isn’t effective. For example, Chicago has used a “heat list” for several years to identify the people most likely to commit or become the victims of violence, yet only three of the 405 homicide victims between 2013 and 2014 were on the list (less than 1 percent).
  2. Algorithms reinforce systemic biases. We also need to consider the systemic biases at play in a system like this. AI algorithms can only analyze the data we give them, so if human police officers hold racial biases and unintentionally feed skewed reports to a predictive policing system, it may see a threat where there isn’t one. The phenomenon is well documented, and because NOPD relies on field interview cards (FICs) as part of its database, racial and other biases may have played a large role in what the predictive policing algorithm flagged. (The sketch after this list illustrates how skewed inputs produce skewed risk scores.)
  3. There’s more in play than we realize. Almost nobody in New Orleans — including city officials — knew that the police department was using this AI system. It’s the latest proof that there’s probably more technology in play at any given moment than we realize, and a perfect example of why we should be careful about what personal information we disclose and how we disclose it.
  4. Transparency will be key moving forward. Defense attorneys are pushing to have evidence gathered by the predictive policing system thrown out, in large part because the tool was used secretly, without the knowledge of the defendant or the defense attorneys. While the appeal may or may not succeed, this is a key lesson: Organizations can and should use any similar algorithms transparently in the future, with police departments, city officials, individuals, and attorneys all in the loop about how these algorithms work.
  5. Convictions and punishments aren’t everything. Finally, we should recognize that this predictive policing system was used only to flag and punish the people deemed most likely to commit crimes. The punitive element of our justice system matters, but it doesn’t move us toward reform, nor does it give high-risk individuals a reasonable chance to change their behavior or learn from their past. If anything, it may feed an endless cycle of recidivism.
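To make the bias point in item 2 concrete, here is a minimal, hypothetical sketch in plain Python. It uses entirely synthetic data and invented numbers, and it is not the NOPD or Palantir system: it simply models two neighborhoods with identical underlying offense rates, one of which is patrolled twice as heavily. Because records only exist when officers are present to write them, a naive risk score built from record counts ranks the heavily patrolled neighborhood as roughly twice as risky.

```python
import random

random.seed(42)

TRUE_OFFENSE_RATE = 0.05                 # identical in both neighborhoods
PATROL_INTENSITY = {"A": 0.6, "B": 0.3}  # neighborhood A is stopped twice as often
PRETEXTUAL_STOP_RATE = 0.1               # share of stops recorded with no offense

def recorded_contacts(neighborhood: str, weeks: int = 52) -> int:
    """Count how many police records one resident accumulates in a year."""
    contacts = 0
    for _ in range(weeks):
        offended = random.random() < TRUE_OFFENSE_RATE
        patrolled = random.random() < PATROL_INTENSITY[neighborhood]
        # A record exists only if an officer was there to write it.
        if patrolled and (offended or random.random() < PRETEXTUAL_STOP_RATE):
            contacts += 1
    return contacts

residents = [(hood, recorded_contacts(hood)) for hood in ("A", "B") for _ in range(1000)]

# A naive "risk score" learned from record counts alone.
avg_score = {
    hood: sum(c for h, c in residents if h == hood) / 1000
    for hood in ("A", "B")
}
print(avg_score)
# Neighborhood A comes out roughly twice as "risky" purely because it was watched more.
```

The figures here are invented, but the feedback loop is real: a score built on enforcement records measures where police look as much as what people do.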

Predictive policing is a natural result of our ever-advancing AI technology and our obsession with heightened security, so it’s unlikely to go away anytime soon. Its use by the NOPD was flawed, to say the least, but this incident highlights some important issues surrounding AI — and the transparency of the public organizations that may be using it.

Anna Johansson is a freelance writer who contributes to Forbes, Inc., Entrepreneur, and The Huffington Post.
