Chicago’s recent attempts to reduce violent crime might not be working as police hoped.
In light of a recent report from the RAND Corporation, the Chicago Police Department (CPD) is on the defensive due to questions regarding the effectiveness of a crime-prediction system that Chicago has been testing since 2013.
The algorithm-based predictive policing program generates a heat list, also known as a Strategic Subjects List (SSL), of the people the system believes are most likely to kill or be killed.
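The CPD has not published the SSL's actual features or weights, so the mechanics can only be illustrated in general terms. The toy sketch below ranks hypothetical individuals by a weighted sum of invented risk factors to show what a "heat list" ordering looks like in principle; every factor name and weight here is an assumption, not the real model.

```python
# Purely illustrative sketch: the factor names and weights below are
# invented for demonstration and do not reflect the CPD's actual SSL model.

def risk_score(record, weights):
    """Weighted sum of risk-factor values for one individual."""
    return sum(weights[f] * record.get(f, 0) for f in weights)

# Hypothetical risk factors and weights (assumptions, not the real system).
WEIGHTS = {
    "prior_shooting_victim": 3.0,
    "weapons_arrests": 2.0,
    "gang_affiliation_flag": 1.5,
}

# Hypothetical individuals with made-up factor values.
people = [
    {"id": "A", "prior_shooting_victim": 1, "weapons_arrests": 2},
    {"id": "B", "weapons_arrests": 1, "gang_affiliation_flag": 1},
    {"id": "C"},
]

# Sort highest-risk first to produce the "heat list" ordering.
heat_list = sorted(people, key=lambda p: risk_score(p, WEIGHTS), reverse=True)
print([p["id"] for p in heat_list])  # → ['A', 'B', 'C']
```

In a real deployment the contested questions are exactly the ones this sketch glosses over: which factors are used, how they are weighted, and whether the resulting ranking is predictive at all.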
“The goal is to ensure the individual is not only informed of the law enforcement consequences for deciding to engage or continue in gun violence, but also the devastating impact of gun violence within their community,” the CPD wrote three years ago in its pilot program directive.
The RAND Corporation report casts doubts on whether the prediction system is actually helping police quell violence in the city. Additionally, as Chicago and Illinois continue to struggle with budgetary issues, continuing investment in an unsuccessful program is likely to prove unpopular with residents.
For its report, the RAND Corporation was provided access to the system and found that it isn’t working as the CPD initially hoped. Instead of helping police officers identify high-risk Chicago residents, and thus respond more accurately with limited police resources, the system is being used as an after-the-fact suspect list once a shooting has already occurred, according to the report.
“Individuals on the SSL are not more or less likely to become a victim of a homicide or shooting than the comparison group, and this is further supported by city-level analysis,” the RAND Corporation wrote. “The treated group is more likely to be arrested for a shooting.”
In response to the new report, CPD Superintendent Eddie Johnson and Director Anthony Guglielmi released a lengthy statement that argued that the RAND Corporation report looked at an early version of the prediction model.
“The paper does not evaluate the prediction model itself [and reviews] our earliest person-based predictive model,” the statement said. “Since that time, the SSL model has undergone extensive refinement and repeated iterations. We are currently using SSL Version 5, which is more than three times as accurate as the version reviewed by RAND (Version 6 is in simultaneous development). Regarding this prediction model, repeated quantitative evaluations have shown that the model produces very accurate findings.”
According to CPD, “RAND only evaluated the first few months of the program, and the findings are no longer relevant.” The organization’s findings do “not support a conclusion that the tools and predictive models to support the strategy are somehow deficient,” the statement said.
While the RAND Corporation and the CPD disagree on the significance of the findings, the report does raise questions regarding new policing technology, as well as a citizen’s right to privacy.
In cities struggling with violent crime, such as Chicago, investing in new technology that promises to lower crime rates can seem like a no-brainer. But if the technology doesn’t reduce crime, it amounts to wasting taxpayer dollars that could have been spent on other crime-reduction techniques.
Additionally, there are ongoing concerns among privacy rights activists.
“Using predictive policing might seem like an ingenious solution to fighting crime, but predictions from data algorithms can often draw inaccurate conclusions,” Renate Samson, chief executive of Big Brother Watch, a U.K.-based privacy rights group, told the BBC. “The police must exercise caution when using data to target people and be sure that they adhere to the rule of innocent until proven guilty.”
Cities across the country must strike a balance between trying new technologies to solve ongoing problems, and protecting citizens’ rights and safeguarding taxpayer dollars.