I recently reviewed Jerome Groopman's How Doctors Think which took the position that, yes, doctors make predictable and well-studied cognitive errors in their diagnoses but that doesn’t mean that we should shift to statistical modeling that would take their discretion away. The relevancy of the book’s discussion to corrections sentencing should be easily apparent, and I recommended that someone do a How Judges Think book. (If someone has some cash to put up, I might be able to find you the author.)
In that spirit, I was very interested in two new books, Ian Ayres’ Super Crunchers: Why Thinking-By-Numbers Is the New Way to Be Smart and Gerd Gigerenzer’s (really) Gut Feelings: The Intelligence of the Unconscious. That these books are diametrically opposed in their advocacies should be apparent in the alternative title that Ayres says he rejected: The End of Intuition, or IOW, gut feelings. So literally, if you read these two books together, you’re getting the prosecution and defense of both intuition and statistical modeling. If you think only an idiot would read these two books together, then read this.
Ayres is an economist, more charming than most but still over-enamoured of numbers at the expense of reality, as I’ll discuss. Basically, Super Crunchers is a discussion of how the world of regression, Bayes' theorem, and multivariate modeling has been taken out of the social sciences and applied to everything from predicting great wines to great mates, from Googling to social policy piloting, from analyzing baseball to analyzing consumer preferences and meeting them in advance. It’s a very nice overview of how regression works and the wide range of uses to which it’s now put, including assaults on professions like medicine (paging Dr. Groopman), education, and corrections sentencing, especially parole decisions and locking sex offenders away forever. Don’t count on the old days, you youngsters thinking about getting the big bucks associated with deference to the superior knowledge of highly trained and credentialed professionals. The computer knows much better than they do. (Along the way he also describes his run-ins with the incredible and strange John Lott, who actually epitomizes why everyone on God’s green earth should fear the Super Crunchers, even if Ayres asserts they can be found out . . . uh, Ian, I’m not betting on it always happening.)
Here’s Ayres’ essential critique of the “experts” and their “discretion” threatened by statistical modeling and prediction: “In context after context, decision makers who wave off the statistical predictions tend to make poorer decisions. . . . [E]xperts are overconfident in their ability to beat the system. We tend to think that the restraints are useful for the other guy but not for us. . . ." And, “If we care solely about getting the best decisions overall, there are many contexts where we need to relegate experts to mere supporting roles in the decision-making process. . . . At a minimum, however, we should keep track of how experts fare when they wave off the suggestions of the formulas. . . .” Sentencing guidelines or info systems, anyone?
Gerd (I’ll call him Gerd) is not impressed. Instead, he invites his readers on “a journey into a largely unknown land of rationality, populated by people just like us, who are partially ignorant, whose time is limited and whose future is uncertain. This land is not one many scholars [read Ayres] write about. They prefer to describe a land where the sun of enlightenment shines down in beams of logic and probability, whereas the land we are visiting is shrouded in a mist of dim uncertainty. In my story, what seem to be ‘limitations’ of the mind can actually be its strengths. How the mind adapts and economizes by relying on the unconscious, on rules of thumb, and on evolved capacities is what [this book] is all about. The laws in the real world are puzzlingly different from those in the logical, idealized [read economics] world. More information, even more thinking, is not always better, and less can be more.”
Like Ayres, only not, he proceeds to give example after example of how the statistics or logic of a situation called for X, the decision maker ended up doing Y, and Y came out right. He shows well how “in an uncertain world, a complex strategy can fail exactly because it explains too much in hindsight. Only part of the information is valuable for the future, and the art of intuition is to focus on that part and ignore the rest” (that is, it takes an experienced expert). He likes quick heuristics, like “fast and frugal” decision trees that limit the analysis to a few answers, that sound like “does he have prior convictions?” and “how severe is the current offense?” Okay, he didn’t mention those, but that’s what you’ll think of. And more to the corr sent point, he spends significant time talking about the quick and dirty factors behind the bail decisions of English and Welsh magistrates (despite their claims to considering all factors), which have major relevance for those of us trying to figure out what factors judges consider at sentencing.
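To make the idea concrete, here is a minimal sketch of what a “fast and frugal” decision tree looks like in code. The cues and their ordering below are my hypothetical illustrations (echoing the questions above), not the actual variables from Gigerenzer’s bail study: each yes/no question either exits immediately with a decision or passes to the next cue.

```python
# A "fast and frugal" tree: a few ordered yes/no cues, each with an
# immediate exit. Later cues are consulted only if earlier ones don't
# decide. The cues here are hypothetical illustrations, not the real
# factors from Gigerenzer's study of English and Welsh magistrates.

def fast_frugal_bail(defendant: dict) -> str:
    """Return a bail recommendation from a short ordered list of cues."""
    if defendant.get("prior_convictions"):        # first cue: exit on "yes"
        return "deny bail"
    if defendant.get("current_offense_severe"):   # second cue: exit on "yes"
        return "deny bail"
    return "grant bail"                           # default when no cue fires

print(fast_frugal_bail({"prior_convictions": False,
                        "current_offense_severe": False}))  # grant bail
```

The point of the structure is exactly Gerd’s: it deliberately ignores most of the available information and leans on the one or two cues that have proven predictive.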
It may sound like these two would be fun to watch in a handball court, but actually they have more in common than we might think. It’s just a difference in emphasis. In fact, the stat modeling is good when the future is like the past and when you don’t need to take into account outliers. In corr sent, we in effect do this with our plea bargaining system. It’s a kind of 80/20 rule, where you have to decide if the case, patient, or student at hand is one of the many or one of the few. Sentencing guidelines advocates focus on the 80, judicial discretion advocates, the 20. Neither is wrong per se. It’s just the problem of getting the applications right, and each can show cases of how the other screwed up. (Ayres spends a lot of pages talking about Bill James’ application of stats to baseball to get better views than oldtimers and scouts, but somehow fails to note how, when James went to work for the Boston Red Sox and recommended “bullpen by committee,” Boston tried it, saw it blow up, and went back to a single closer the next year.)
Another problem is what all of us who’ve ever done prison projections, which are based on super crunching, know—we predict the past better than the future and unchanging worlds better than changing ones. If the historical flows that provided the stats for our regressions should shift, decline in magnitude, or whatever (parole agents see beds available after new prison construction, so they start revoking at higher rates than in the projection model and the prisons fill up three years faster than the model predicts), then you find yourself in the same bind as the states that have overbuilt and underbuilt bedspace right now. Both authors recognize this. Ian thinks it’s rare, Gerd thinks it’s frequent. They’re both right, just depends on when and where.
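The parole-revocation example above can be sketched with a toy projection. All the numbers here are mine, invented for illustration (not either author’s): the model carries last decade’s revocation rate forward, but once new bedspace opens, agents revoke at a higher rate, and the prison hits capacity years before the model says it will.

```python
# A toy prison projection (all numbers are invented for illustration):
# the model assumes the historical revocation rate holds, but behavior
# shifts once new beds open, so the actual population fills capacity
# years earlier than projected.

def project(pop, admissions, model_rate, actual_rates, capacity):
    """Return (projected_year, actual_year) at which capacity is reached."""
    def years_to_fill(rates):
        p = pop
        for year, r in enumerate(rates, start=1):
            p += admissions * r          # revocations added this year
            if p >= capacity:
                return year
        return None                      # never fills within the horizon
    projected = years_to_fill([model_rate] * len(actual_rates))
    actual = years_to_fill(actual_rates)
    return projected, actual

# Model assumes the historical 10% revocation rate; in fact it doubles
# to 20% once new bedspace appears.
proj, act = project(pop=1000, admissions=500, model_rate=0.10,
                    actual_rates=[0.20] * 10, capacity=1300)
print(proj, act)  # 6 3 -- the prison fills three years faster than projected
```

Nothing in the regression was “wrong” about the past; the world it was fitted to simply stopped existing the day the new beds opened.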
They actually agree on one surprising point. It’s not an either-or, either super crunching or gut feeling. It’s a combo, with the stats being informed by the insights of experts and practitioners who’ve learned from exercise of their discretion. And both feel that, rather than providing experts numbers and letting them decide, it’s best to let the experts gauge and tune the models. In our corr sent world, it means sentencing info systems or guidelines generating statistics that receive constant feedback from those using them and seeing how their recommendations work in actual practice. Having always pushed for greater info for the judges at sentencing and seen them pretty much reject or ignore that info in all three states I’ve worked sentencing in, I’ve come to the same conclusion. Anyone hoping that with sentencing info systems you can just build the data and they will come will be sadly mistaken, for the reasons of overconfidence and ego that Ayres outlines. Better to set up the stat modeling and make the judges deal with the numbers and show when they fail. Although neither author deals specifically with these issues, I’m sure they would agree and that we should follow. After all, they’re the experts.