The use of automated decision-making systems by the District of Columbia (D.C.) government is having a widespread impact on the accuracy, fairness, and equity of decisions that affect District residents, according to a report released on Nov. 1 by a prominent privacy advocacy group.

The report – entitled ‘Screened and Scored in the District of Columbia’ and published by the Electronic Privacy Information Center (EPIC) – explains that the D.C. government routinely outsources critical government decisions to automated decision-making systems in areas such as public benefits, policing, and housing. That practice, the report asserts, shapes the accuracy, fairness, and equity of some of the decisions that affect city residents.

In the face of those findings, EPIC argued that D.C. needs to do more to increase transparency into how automated systems work, and to give city residents recourse to challenge the recommendations those systems make.

According to the report, the D.C. criminal justice system uses an automated decision-making tool called the risk assessment instrument. Information about a defendant’s criminal history, employment status, and demographic background is fed into the system. Then, during pretrial hearings, the risk assessment instrument predicts how likely the defendant is to be arrested again or fail to appear at trial.

“The risk assessment instrument automatically applies different weights to each piece of information and aggregates a risk score,” the report says.
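At a high level, that kind of weighted aggregation can be sketched in a few lines of code. The factors, weights, and cutoffs below are hypothetical placeholders for illustration only; they are not drawn from the actual instrument used in D.C. pretrial hearings.

```python
# Hypothetical sketch of a weighted risk-score aggregation.
# Factor names, weights, and thresholds are illustrative only;
# they are NOT the actual D.C. risk assessment instrument.

def aggregate_risk_score(defendant: dict) -> int:
    """Sum weighted points for each factor to produce a single risk score."""
    weights = {
        "prior_arrests": 2,              # points per prior arrest (assumed)
        "prior_failures_to_appear": 3,   # points per prior failure to appear (assumed)
        "unemployed": 1,                 # flat points if unemployed (assumed)
    }
    score = 0
    score += weights["prior_arrests"] * defendant.get("prior_arrests", 0)
    score += weights["prior_failures_to_appear"] * defendant.get("prior_failures_to_appear", 0)
    score += weights["unemployed"] * int(defendant.get("unemployed", False))
    return score

def risk_category(score: int) -> str:
    """Map the aggregate score onto coarse categories (cutoffs assumed)."""
    if score <= 2:
        return "low"
    if score <= 6:
        return "moderate"
    return "high"

# Example: a defendant with one prior arrest and no other flags.
example = {"prior_arrests": 1, "unemployed": False}
print(risk_category(aggregate_risk_score(example)))  # -> "low"
```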

But these automated decision-making systems have “perpetrated algorithmic harm in ways that aren’t equitable,” EPIC said.

The report also analyzes the Metropolitan Police Department’s use of ShotSpotter technology, which uses a series of publicly installed acoustic sensors to detect gunshots and send police to the location of the gunfire.
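Gunshot-locating systems of this kind generally work by comparing when the same sound reaches different sensors. The sketch below illustrates that general time-difference-of-arrival idea with made-up sensor positions and timings; it is not ShotSpotter’s proprietary method.

```python
# Generic sketch of locating a sound source from acoustic-sensor arrival times.
# Illustrates time-difference-of-arrival localization in general; it is NOT
# ShotSpotter's proprietary method, and all numbers are made up.
import itertools
import math

SPEED_OF_SOUND = 343.0  # meters per second, roughly, at 20 C

def locate(sensors, arrival_times, grid_step=5.0, extent=500.0):
    """Brute-force grid search for the point whose predicted arrival-time
    differences best match the observed ones (least squares)."""
    best, best_err = None, float("inf")
    xs = [i * grid_step for i in range(int(extent / grid_step) + 1)]
    for x, y in itertools.product(xs, xs):
        # Predicted travel time from the candidate point to each sensor.
        pred = [math.hypot(x - sx, y - sy) / SPEED_OF_SOUND for sx, sy in sensors]
        # Compare time differences relative to the first sensor.
        err = sum(((pred[i] - pred[0]) - (arrival_times[i] - arrival_times[0])) ** 2
                  for i in range(1, len(sensors)))
        if err < best_err:
            best, best_err = (x, y), err
    return best

# Three hypothetical sensors and the time (seconds) each heard the same bang.
sensors = [(0.0, 0.0), (400.0, 0.0), (0.0, 400.0)]
times = [0.90, 0.95, 1.10]
print(locate(sensors, times))  # approximate (x, y) of the source, in meters
```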

EPIC said, however, that these sensors often yield no evidence of gun-related crimes. In addition, it said, police departments have often placed these sensors in predominantly Brown and Black neighborhoods.

“False ShotSpotter alerts send police – expecting an armed suspect – into communities whose members are already more likely to be harmed or killed by officers. ShotSpotter has the potential to contribute to, rather than alleviate, violence,” the report says.

The report also argues that automated decision-making systems require more oversight and transparency to better protect the rights of D.C. residents. For example, the D.C. Housing Authority uses an algorithm to screen applicants’ criminal histories and assess their likelihood of making rent payments on time. The agency also uses algorithms to assist caseworkers in determining which applicants are most in need and who gets housing first.
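Ranking applicants by need typically reduces to computing a score for each applicant and sorting on it. The criteria and weights in the sketch below are assumptions for illustration and are not the Housing Authority’s actual screening or prioritization logic.

```python
# Hypothetical sketch of need-based applicant prioritization.
# Criteria and weights are assumptions for illustration, not the
# D.C. Housing Authority's actual algorithm.
from dataclasses import dataclass

@dataclass
class Applicant:
    name: str
    months_homeless: int
    household_size: int
    monthly_income: float

def need_score(a: Applicant) -> float:
    """Higher score = greater assumed need; weights are illustrative."""
    income_factor = max(0.0, 1.0 - a.monthly_income / 3000.0)  # assumed income cap
    return 2.0 * a.months_homeless + 1.5 * a.household_size + 10.0 * income_factor

applicants = [
    Applicant("A", months_homeless=6, household_size=3, monthly_income=900),
    Applicant("B", months_homeless=1, household_size=1, monthly_income=2400),
]
# Rank so the highest-need applicant is offered housing first.
for a in sorted(applicants, key=need_score, reverse=True):
    print(a.name, round(need_score(a), 1))
```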

“People impacted by automated decision-making systems are rarely given much information about how they work, [and] it is hard to determine whether these systems are accurate,” the report says.

Plus, the lack of transparency in automated decision-making systems makes challenging those decisions difficult for constituents, it says. “You have a right to due process when the government [decides] your fundamental rights, such as your eligibility for public benefits,” EPIC said.

To address these concerns, the report recommends that D.C. government agencies offer more transparency into the decision-making process of automated systems. It also recommends that government agencies not depend wholly on the results of automated systems.

“Automated decision-making systems should have similar protective mechanisms that ensure accuracy, fairness, and equity. But many do not: they make decisions without much oversight or input, decisions that are difficult to challenge, and [unfair decisions],” the report states. “When government agencies assume automated decision-making systems are accurate, it can be difficult to hold agencies accountable for the harm these tools cause.”
