In August 2018, Keith White was arrested in Fayette County, Kentucky, for possession of cocaine, possession of marijuana, two traffic violations, and a gun charge. White’s case was assigned to District Court Judge Megan Thornton, who then faced a decision: Should White be released to await his trial at home, or should he remain in custody unless he could pay bail?
This is a decision judges across the country face every single day, with weighty consequences.
The legal purpose of bail is to protect the community from possible further criminal conduct and to incentivize defendants to appear in court. But setting bail has costs. If the individual does not have the money to post bail, they are put behind bars even though they have not been convicted. Even a few days in jail can have ruinous consequences, including job loss and trauma for the defendant’s family. It also increases the likelihood of conviction since people in jail are more likely to plead guilty.
In the case of Keith White, Judge Thornton set bail at $7,500, an amount White could not pay. He spent the next 83 days in jail before his trial.
Algorithms are all around
The decision about whether to set money bail is, at its heart, a prediction about what a defendant is likely to do in the future. This is a space where algorithms, sets of instructions for solving a problem or predicting an outcome, can be helpful because they can be trained on thousands, even millions, of past events. As a result, algorithmic predictions can outperform human predictions.
While algorithmic-based rules have replaced human discretion in some settings, there are a number of high-stakes environments where algorithmic predictions interact with human decision-makers. Opportunity & Inclusive Growth Institute economist Alex Albright describes four types of decision-making environments, illustrated in Figure 1.
Figure 1: Spectrum of algorithm-based decision-making systems, from least to most reliant on algorithms

1. No algorithmic information given to humans
2. Algorithmic predictions given to humans
3. Algorithmic predictions and recommendations given to humans
4. Algorithm-based rules dictate outcomes
While scenarios 1 and 4 get the lion’s share of the scrutiny, scenarios 2 and 3 occur often, and in high-stakes environments. For instance, every day lenders decide whether to grant or deny loan applications. An algorithm may predict how likely an applicant is to default, and the firm may recommend denying applications that score below a certain threshold, but employees often make the final call. Algorithms are also used in mental health care settings to predict how likely a person is to harm themselves or others. Health care professionals can use that prediction to decide whether to hospitalize someone.
In the criminal legal system, algorithms have been used since at least the 1970s to assess a defendant’s risk before trial. This risk is communicated to the judge, who decides whether to set money bail or allow the defendant to be released (no money bail).
These decision-making environments share another feature: For the decision-maker, mistakes are visible only if they choose the “lenient” action and something bad happens (a loan is not repaid, a defendant does not appear for trial). Mistakes are impossible to observe when a “harsh” action is selected instead: Would the applicant who was denied a loan have repaid it? Would the defendant held on bail have appeared for trial? This lopsidedness may push decision-makers to act more harshly in ways that are harmful to individuals and society, forgoing the benefits of warranted leniency: Loans to responsible applicants boost the economy, and releasing low-risk patients and defendants saves resources and can build trust in health care and criminal legal institutions.
In these complex decision-making environments, what effect do algorithms have? “The conventional wisdom is that algorithms impact human decisions because they provide predictions, but algorithms can also inform recommendations,” Albright said. “To understand the effects of algorithms, we need to understand how algorithmic predictions and recommendations independently change human decisions.” Recommendations provide a normative mapping from prediction to action: set money bail for this group but not for that; approve loan applications for that group but not for this. Might these recommendations change the cost-benefit calculation of the decision-maker?
Because algorithm-based predictions and recommendations are often introduced at the same time, they are difficult to study in a way that isolates the two effects. But Albright found a way: a natural experiment that arose when recommendations were introduced to Kentucky’s pretrial detention system, a setting where algorithms were already in use.
How recommendations changed bail
Kentucky has used algorithms to assess a defendant’s risk before trial since 1976. This assessment formalizes the relevance of specific defendant characteristics and produces a “risk score” for that defendant. Figure 2 reproduces the algorithm that went into effect in Kentucky in March 2011.
Question | Points if answer is “Yes” | Points if answer is “No” |
---|---|---|
Did a reference verify that he or she would be willing to attend court with the defendant or sign a surety bond? | 0 | 1 |
Does the defendant have a verified local address, and has the defendant lived in the area for the past 12 months? | 0 | 1 |
Does the defendant have a verified sufficient means of support? | 0 | 1 |
Is the defendant’s current charge a Class A, B, or C felony? | 1 | 0 |
Is the defendant charged with a new offense while there is a pending case? | 5 | 0 |
Does the defendant have an active warrant(s) for Failure to Appear prior to disposition? If no, does the defendant have a prior Failure to Appear for a felony or misdemeanor? | 4 | 0 |
Does the defendant have a prior Failure to Appear on his or her record for a criminal traffic violation? | 1 | 0 |
Does the defendant have prior misdemeanor convictions? | 1 | 0 |
Does the defendant have prior felony convictions? | 1 | 0 |
Does the defendant have prior violent crime convictions? | 2 | 0 |
Does the defendant have a history of drug or alcohol abuse? | 2 | 0 |
Does the defendant have a prior conviction for felony escape? | 1 | 0 |
Is the defendant currently on probation or parole from a felony conviction? | 2 | 0 |
Scores of 0–5 were categorized as “low risk,” scores of 6–13 were categorized as “moderate risk,” and scores of 14–24 were categorized as “high risk.” Judges were told whether a defendant was considered low, moderate, or high risk, but not a defendant’s exact score. The judge then decided on bail.
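To make the mechanics concrete, here is a minimal sketch of how a point-based assessment like this one could be computed. The point values and category cut-offs follow Figure 2 and the paragraph above; the field names and code structure are illustrative assumptions, not Kentucky’s actual implementation.

```python
# A sketch of the point-based risk assessment in Figure 2. Point values and
# category cut-offs follow the article; field names are hypothetical.

def risk_score(answers: dict[str, bool]) -> int:
    """Sum the points for each yes/no question in Figure 2.

    Assumes every question has been answered (True = "Yes", False = "No").
    """
    # Each entry maps a question to (points if "Yes", points if "No").
    points = {
        "reference_would_attend_court_or_sign_surety": (0, 1),
        "verified_local_address_12_months": (0, 1),
        "verified_means_of_support": (0, 1),
        "current_charge_class_a_b_c_felony": (1, 0),
        "new_offense_while_case_pending": (5, 0),
        "active_fta_warrant_or_prior_felony_misdemeanor_fta": (4, 0),
        "prior_fta_criminal_traffic": (1, 0),
        "prior_misdemeanor_convictions": (1, 0),
        "prior_felony_convictions": (1, 0),
        "prior_violent_crime_convictions": (2, 0),
        "history_of_drug_or_alcohol_abuse": (2, 0),
        "prior_felony_escape_conviction": (1, 0),
        "on_felony_probation_or_parole": (2, 0),
    }
    return sum(yes if answers[q] else no for q, (yes, no) in points.items())

def risk_category(score: int) -> str:
    """Map a raw score to the category judges saw (the score itself was not shown)."""
    if score <= 5:
        return "low"
    if score <= 13:
        return "moderate"
    return "high"
```

Under this scoring, for example, a defendant charged with a new offense while a case is pending (5 points) who also has an active failure-to-appear warrant (4 points) already reaches a score of 9 and is categorized as moderate risk.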
Then, in June 2011, the Kentucky state legislature passed a law recommending that judges set no money bail for defendants with low or moderate risk scores, an attempt to reduce the financial burden of the state’s jail population. The legislation did not provide any recommendation for defendants deemed “high risk.”
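The recommendation itself is a simple mapping from the algorithm’s output category to a suggested action. Continuing the illustrative sketch above, the mapping the 2011 legislation created might look like this:

```python
def recommendation(category: str) -> str | None:
    """The normative mapping introduced by the June 2011 legislation."""
    if category in ("low", "moderate"):
        return "release with no money bail"
    return None  # the law made no recommendation for high-risk defendants
```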
From March through June 2011, just before the law was passed, 90 percent of cases received a low or moderate risk score. However, judges released defendants from pretrial detention with no money bail in only 32 percent of these cases.
To estimate the effect of the new recommendation, Albright used the fact that defendants with scores on either side of the cut-off between moderate and high risk received different recommendations. Defendants with a score of 13, the highest moderate-risk score, received a recommendation of no money bail. Defendants with a score of 14, the lowest high-risk score, received no recommendation. Figure 3 shows that defendants with scores just below the cut-off received no money bail more frequently after the policy. In contrast, there was no change for high-risk defendants with scores just above the cut-off.
Overall, the legislation increased lenient decisions by 50 percent for defendants with scores just below the cut-off.
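Albright’s comparison at the cut-off can be illustrated with a stylized before-and-after calculation. The records below are invented for illustration only; they are not Albright’s data, though they mimic the pattern in Figure 3:

```python
# Stylized illustration of the comparison behind Figure 3, using invented
# records of the form (risk_score, is_after_law, released_no_money_bail).
cases = [
    (13, False, False), (13, False, True), (13, False, False), (13, False, False),
    (13, True, True),  (13, True, True),  (13, True, False), (13, True, True),
    (14, False, False), (14, False, True), (14, False, False), (14, False, False),
    (14, True, False), (14, True, True),  (14, True, False), (14, True, False),
]

def lenient_share(score: int, after_law: bool) -> float:
    """Share of cases at a given score released with no money bail."""
    subset = [released for s, after, released in cases
              if s == score and after == after_law]
    return sum(subset) / len(subset)

for score in (13, 14):
    print(f"score {score}: "
          f"{lenient_share(score, False):.0%} before -> "
          f"{lenient_share(score, True):.0%} after")
# If the recommendation changed behavior, the no-money-bail share should jump
# at score 13 (recommendation applies) but not at score 14 (no recommendation).
```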
Albright interprets these findings as evidence that the recommendations changed how judges perceived the cost of a “mistake”: releasing a defendant with no money bail who then commits a crime before trial or fails to appear in court. “After the legislation, some of the blame for a lenient choice may go to the legislature, which established the recommendation, rather than the judge,” Albright said. “The algorithm’s predictions do not explain the observed changes in judges’ behavior. Rather, the new recommendation changed judges’ incentives.”
This suggests algorithmic recommendations may be particularly important when policymakers’ goals and the front-line decision-makers’ incentives are not aligned. Recommendations can be designed to meet certain policy objectives. In Kentucky, new recommendations reduced the use of money bail, but in a different context, new recommendations increased immigration detention rates. Changing algorithmic recommendations can change outcomes in economically meaningful ways, even if the underlying assessment of risk stays the same.
Lisa Camner McKay is a senior writer with the Opportunity & Inclusive Growth Institute at the Minneapolis Fed. In this role, she creates content for diverse audiences in support of the Institute’s policy and research work.