I've written before about how it's "unexpected" that AI (and body cams), intended to be fair and impartial, suddenly turn out to be "racist" and "sexist". Now it's the turn of algorithms intended to make justice more impartial. Per Wired's subheader: "A 2011 Kentucky law requires judges to consult an algorithm when deciding whether defendants must post cash bail. More whites were allowed to go home, but not blacks."

Further in:

Kentucky lawmakers thought requiring that judges consult an algorithm when deciding whether to hold a defendant in jail before trial would make the state’s justice system cheaper and fairer by setting more people free. That’s not how it turned out.

Before the 2011 law took effect, there was little difference between the proportion of black and white defendants granted release to await trial at home without cash bail. After being mandated to consider a score predicting the risk a person would reoffend or skip court, the state’s judges began offering no-bail release to white defendants much more often than to blacks. The proportion of black defendants granted release without bail increased only slightly, to a little over 25 percent. The rate for whites jumped to more than 35 percent. Kentucky has changed its algorithm twice since 2011, but available data shows the gap remained roughly constant through early 2016.

The Kentucky experience, detailed in a study published earlier this year, is timely. Many states and counties now calculate “risk scores” for criminal defendants that estimate the chance a person will reoffend before trial or skip court; some use similar tools in sentencing. They are supposed to help judges make fairer decisions and cut the number of people in jail or prison, sometimes as part of eliminating cash bail. Since 2017, Kentucky has released some defendants scored as low-risk based purely on an algorithm’s say-so, without a judge being involved.
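To make concrete what "released based purely on an algorithm's say-so" means in practice, here is a purely illustrative sketch of a threshold-based pretrial release rule. The factors, weights, and cutoff below are invented for illustration; Kentucky's actual risk assessment uses different inputs and scales.

```python
# Hypothetical sketch of a point-score pretrial release rule.
# All weights and the cutoff are invented, not Kentucky's real tool.

def risk_score(prior_failures_to_appear: int, prior_convictions: int,
               pending_charge: bool) -> int:
    """Toy additive point score: more criminal history -> higher score."""
    score = 2 * prior_failures_to_appear + prior_convictions
    if pending_charge:
        score += 3
    return score

LOW_RISK_CUTOFF = 4  # hypothetical threshold separating "low risk" from the rest

def recommend_release(score: int) -> str:
    """Defendants at or below the cutoff are flagged for no-bail release."""
    return "release without bail" if score <= LOW_RISK_CUTOFF else "refer to judge"

print(recommend_release(risk_score(0, 2, False)))  # low history, below cutoff
print(recommend_release(risk_score(3, 4, True)))   # extensive history, above cutoff
```

The point of the sketch is that the rule itself is mechanical; the disparities the study found arise from what feeds the score (past history correlates with group membership) and how judges use or override the recommendation.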

Of course, it's blamed on people interpreting the numbers wrong, or ignoring them, or... whatever. But when researchers ran a blind-ish trial, they found something interesting.

An experiment that tested how judges react to hypothetical risk scores for determining sentences also found evidence that algorithmic advice can cause unexpected problems. The study, which is pending publication, asked 340 judges to decide sentences for made-up drug cases. Half of the judges saw “cases” with risk scores estimating the defendant had a medium to high risk of rearrest and half did not.

When they weren’t given a risk score, judges were tougher on more-affluent defendants than poor ones. Adding the algorithm reversed the trend: Richer defendants had a 44 percent chance of doing time but poorer ones a 61 percent chance. The pattern held after controlling for the sex, race, political orientation, and jurisdiction of the judge.

“I thought that risk assessment probably wouldn’t have much effect on sentencing,” says Jennifer Skeem, a UC Berkeley professor who worked on the study with colleagues from UC Irvine and the University of Virginia. “Now we understand that risk assessment can interact with judges to make disparities worse.”

That's right. The risk scores, built on correlations between past history and likelihood to reoffend, were apparently higher for the poor, etc. (keep in mind the correlation, on average, between poverty and IQ, and, in turn, between IQ and crime). So when the scores were factored in, the judges, instead of treating the wealthy and poor the same for any given instance as they had in the past, were more likely to follow the risk assessment.

Of course, the proposed solution is to train people to incorporate a fudge factor to counteract the standards-based numbers.


It's almost like the article incorporated into Pournelle's A Step Further Out, where he was working with datasets to determine the likelihood of graduating from college based on incoming school, IQ, grades, etc., and the correlations they discovered did not fit the narrative.

Funny how, when you measure against a standard, differences in culture, etc., manifest themselves across groups.