Artificial Intelligence will be used to arrest and imprison millions of innocent people

AI experts from top universities SLAM ‘predictive policing’ tools in new statement and warn technology could ‘fuel misconceptions and fears that drive mass incarceration’

Both police and judges have relied on algorithms to predict crime and recidivism. But experts warn the technology could have major consequences. Stock image

  • AI experts say pre-crime algorithms are more magic than reality
  • Algorithms designed to predict violent crime may come with consequences
  • Experts say they may vastly overstate the likelihood of pretrial crime
  • They warn their use could fuel mass incarceration and lead to harsher sentences

Prominent thinkers in the field of artificial intelligence say that predictive policing tools are not only ‘useless,’ but may be helping to drive mass incarceration. In a letter published earlier this month, the experts, from MIT, Harvard, Princeton, NYU, UC Berkeley and Columbia, spoke out on the topic in an unprecedented showing of skepticism toward the technology.

‘When it comes to predicting violence, risk assessments offer more magical thinking than helpful forecasting,’ wrote AI experts Chelsea Barabas, Karthik Dinakar and Colin Doyle in a New York Times op-ed.

Predictive policing tools, or risk assessment tools, are algorithms designed to predict the likelihood that someone will commit a crime in the future. With rapid advances in artificial intelligence, the tools have found their way into the everyday processes of judges, who deploy them to determine sentencing, and police departments, which use them to allocate resources and more.

While the technology has been positioned as a way to combat crime preemptively, experts say its capabilities have been vastly overstated. Among the arenas most affected by the tools, they say, is pretrial detention, in which people awaiting trial may be held based on their assessed risk of committing a crime.

‘Algorithmic risk assessments are touted as being more objective and accurate than judges in predicting future violence,’ write the researchers. ‘Across the political spectrum, these tools have become the darling of bail reform. But their success rests on the hope that risk assessments can be a valuable course corrector for judges’ faulty human intuition.’

Experts say the tools tend to overestimate accused people’s risk of violence when, in fact, the likelihood of crimes being committed while awaiting trial is small. Because that rate is so low, algorithms are at a statistical disadvantage when predicting pretrial crime. According to the op-ed, 94 percent of people accused of a crime in Washington D.C. are released, and only 2 percent of those released are arrested for a violent crime afterward.

According to the National Institute of Justice, predictive policing ‘[harnesses] the power of information, geospatial technologies and evidence-based intervention models to reduce crime and improve public safety.’ It is also used to assess the likelihood that someone facing trial will commit another crime and whether or not they should be detained.

AI experts have derided the algorithms’ use, saying that they are prone to vastly overstating the likelihood of violent crime. Yet the researchers point out that it is not uncommon for states to detain 30 percent of people awaiting trial.

‘[The tools] give judges recommendations that make future violence seem more predictable and more certain than it actually is,’ write the researchers.
‘In the process, risk assessments may perpetuate the misconceptions and fears that drive mass incarceration.’

One of the most prominent tools used by judges is the Public Safety Assessment, which, like many other tools, crunches numbers based on criminal history and personal characteristics. The tool flags a person as a candidate for ‘new violent criminal activity’ or not. For the technology to be truly accurate, experts say, it should predict that almost all people are at zero risk, given the low statistical likelihood of pretrial violence.

‘Instead, the P.S.A. sacrifices accuracy for the sake of making questionable distinctions among people who all have a low, indeterminate or incalculable likelihood of violence,’ say the experts.

To better prevent crime, the researchers suggest easing reliance on algorithms and putting resources into more holistic measures. ‘Policy solutions cannot be limited to locking up the “right” people,’ they write. ‘They must address public safety through broader social policies and community investment.’