EU Parliament Told Predictive Policing Software Relies On Dirty Data Generated By Corrupt Cops

(Mis)Uses of Technology

from the junk-data,-garbage-outcomes dept

Thu, Mar 26th 2020 3:49am — Tim Cushing

Predictive policing efforts continue to expand around the world. Fortunately, so has the criticism. The data witchcraft that expects a bunch of crap data created by biased policing to coalesce into actionable info continues to be touted by those least likely to be negatively affected by it: namely, law enforcement agencies and the government officials that love them.

The theory that people should be treated like criminals because someone else committed crimes in the area at some point in the past is pretty specious. But as long as it produces temporary drops in criminal activity, fans of unreasonable suspicion will keep using these tools, which still have no proven long-term track record.

It's not just a US problem. It's a problem everywhere. The European Parliament has been asking for feedback on predictive policing efforts, which is more than most agencies in the US are willing to do. The Executive Director of the AI Now Institute, Andrea Nill Sanchez, recently testified during a public hearing on the issue, using the Institute's 2019 report on predictive policing to highlight everything that's wrong with turning law enforcement over to modeling algorithms. (via The Next Web)

The point of Sanchez's testimony [PDF] is this: we can't trust the data being fed to these systems. Therefore, we definitely can't trust the predictions being produced by them. Dirty cops create dirty data.

Despite what the term may suggest, predictive policing is neither magic nor a precise science that allows us to see into the future. Instead, predictive policing refers to fallible systems that use algorithms to analyze available data and aim to produce a forecasted probability of where a crime may occur, who might commit it, or who could be a victim.

Left unchecked, the proliferation of predictive policing risks replicating and amplifying patterns of corrupt, illegal, and unethical conduct linked to legacies of discrimination that plague law enforcement agencies across the globe.
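To make the mechanics concrete, here's a minimal sketch of the kind of grid-based hotspot scoring these tools perform. Everything in it -- the cell size, the decay weighting, the incident format -- is an assumption for illustration, not any vendor's actual algorithm.

```python
from collections import defaultdict
from math import exp

# Hypothetical illustration of grid-based hotspot scoring. The cell
# size, decay rate, and incident format are all assumed values, not
# any real vendor's parameters.
CELL_SIZE = 0.005   # grid cell edge in degrees of lat/lon (assumed)
DECAY = 0.05        # per-day decay: older incidents count for less

def hotspot_scores(incidents):
    """Score grid cells by time-decayed counts of recorded incidents.

    `incidents` is an iterable of (lon, lat, days_ago) tuples drawn
    from police records -- which is exactly the problem: the input is
    what police chose to record, not what actually happened.
    """
    scores = defaultdict(float)
    for lon, lat, days_ago in incidents:
        cell = (int(lon / CELL_SIZE), int(lat / CELL_SIZE))
        scores[cell] += exp(-DECAY * days_ago)
    return scores

def top_cells(incidents, k=3):
    """Return the k highest-scoring cells -- where patrols get sent."""
    scores = hotspot_scores(incidents)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```

Nothing in that scoring step can tell a real crime pattern from a patrolling pattern. The model faithfully amplifies whatever the records contain.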

Sanchez's warning is drawn from AI Now's report, which pointed out that some of the most corrupt police forces in the nation were feeding data from biased policing into systems destined to generate nothing but more of the same corruption.

(1) Chicago, an example of where dirty data was ingested directly into the city’s predictive system; (2) New Orleans, an example where the extensive evidence of dirty policing practices and recent litigation suggests an extremely high risk that dirty data was or could be used in predictive policing; and (3) Maricopa County, where despite extensive evidence of dirty policing practices, a lack of public transparency about the details of various predictive policing systems restricts a proper assessment of the risks.

This is only one of several problems. First, almost every system used by law enforcement agencies is a closed system that can't be inspected by anyone outside the companies selling it and the agencies using it. There's almost no way for outsiders to vet the data or the outcomes; they can only judge the systems by their results. And those results are withheld or released pretty much at the sole discretion of the agencies using the software. If it's not producing decreases in crime rates, citizens will be the last to know.

Second, the data being used is both tainted and selective, skewing the output toward the selective enforcement that agencies already engage in.

[P]redictive policing primarily relies on inherently subjective police data, which reflects police practices and policies—not actual crime rates. Law enforcement exercises enormous discretion in how it carries out its work and collects data, including the crimes and criminals it overlooks.

Finally, there's no way to cleanse the systems of dirty data. Police departments are unable to recognize that their own biases might taint inputs. And citizens are powerless to challenge the data that's being used to target them for increased law enforcement scrutiny. With pushback effectively neutralized -- both by police practices and the secrecy surrounding proprietary algorithms -- agencies are just going to continue engaging in the same biased policing, only with the justification that the software told them to do it.
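That feedback loop is easy to demonstrate. In the toy simulation below (every number is an assumed value for illustration, not drawn from any real deployment), two districts have identical true crime rates, but one starts with a slightly inflated record from past over-policing. Because the "hotspot" ranking sends patrols where the records are, and the records grow where the patrols are, the initial skew locks in and compounds.

```python
# Toy model of a predictive-policing feedback loop. All numbers are
# assumptions for illustration; this is not any vendor's algorithm.
# Both districts have the SAME true crime rate, but district A starts
# with a slightly inflated record from past over-policing.

TRUE_CRIME = 100        # actual crimes per period in EACH district
DETECTION = 0.08        # fraction of crimes recorded per patrol unit

recorded = {"A": 60.0, "B": 50.0}   # dirty historical data

for period in range(1, 11):
    # "Prediction": rank districts by recorded crime; the top-ranked
    # hotspot gets the bulk of the patrols (7 vs. 3, an assumed split).
    hot, cold = sorted(recorded, key=recorded.get, reverse=True)
    patrols = {hot: 7, cold: 3}
    # Recorded crime tracks patrol presence, not actual crime levels.
    for d in recorded:
        recorded[d] += TRUE_CRIME * DETECTION * patrols[d]
    share = recorded["A"] / (recorded["A"] + recorded["B"])
    print(f"period {period:2d}: A holds {share:.0%} of recorded crime")
```

The printed shares climb toward the patrol split, not toward the (equal) underlying crime rates. A small artifact of past over-policing permanently ranks district A as the hotspot: its record grows faster every period, which keeps the patrols there, which keeps the record growing -- a closed loop the actual crime rates can never correct.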

Since life and liberty are on the line for citizens put at the mercy of software being fed junk data, Sanchez suggests the following should be instituted (at a bare minimum) before predictive policing software is deployed:

As a first step, agencies considering using predictive policing tools should undertake Algorithmic Impact Assessments that include the following: (1) a self-assessment evaluating the system’s potential impacts on fairness, justice, and bias; (2) a meaningful external review process; (3) public notice and comment; and (4) enhanced due process mechanisms to challenge unfair, biased, or other harmful effects.

But that's just the start. The whole system has failed for years, and ground-up reform is needed before this software can be trusted in the hands of police. Sanchez calls for reform of the entire criminal justice system, since it's the source of institutional racism and the enabler of some of the worst law enforcement behavior.

The negatives far outweigh the positives when it comes to predictive policing. The EU is taking the right step by asking for public input. Here in the US, public input is still an afterthought -- something that only happens after enough damage has been done. Turning law enforcement over to tainted data and proprietary algorithms poses a genuine threat to life and liberty. Hopefully, someone in the US will take a cue from the EU Parliament's actions and start asking law enforcement agencies tough questions about their predictive policing programs.


(Source: techdirt.com; March 26, 2020; https://tdrt.io/hPd)