One of the very best and most important books of this decade (which means it’s been pretty much ignored)—Philip Tetlock’s Expert Political Judgment—gets a rerun review here at The Situationist. I reviewed it deep in the archives a long time ago, too long ago to find it quickly now, but this review does a good job of getting the basic ideas across. Here are just a couple of grafs to whet your appetite for the whole thing:
The results of his painstaking research are complex, nuanced, and contingent, but the bottom line is clear enough. Tetlock’s data “plunk human forecasters into an unflattering spot along the performance continuum, distressingly closer to the chimp than to the formal statistical models.” In fact, “it is impossible to find any domain in which humans clearly outperformed crude extrapolation algorithms, less still sophisticated statistical ones” (emphasis in original). Worst of all, those experts with the poorest track records are the most likely to show up on TV screens and blogsites everywhere.
To cope with the mind-boggling complexity involved in processing over 80,000 expert predictions and distilling the concomitants of accuracy, Tetlock boils things down to a single dimension of cognitive style that captures most of the good judgment he could find. Drawing on an essay by Isaiah Berlin, Tetlock distinguishes between “foxes,” who “‘know many little things,’ draw from an eclectic array of traditions, and accept ambiguity and contradiction as inevitable features of life” and “hedgehogs,” who “‘know one big thing,’ toil devotedly within one tradition, and reach for formulaic solutions to ill-defined problems.”
Much of the book details the ways in which foxes outperform hedgehogs as prognosticators and Bayesian updaters. Foxes scored higher than others on measures of calibration; their subjective probability estimates were better correlated with the objective frequencies of the events they were predicting, especially in the short term. The worst judges were hedgehog extremists who made long-term predictions in their own areas of expertise. They correctly anticipated war in the former Yugoslavia, but they also predicted several wars that did not happen. Even more than others, they frequently overestimated the likelihood of drastic changes from the status quo.
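The "calibration" being scored here is straightforward to compute: group forecasts by their stated probability, then compare each group's average stated probability with how often the predicted events actually occurred. Here is a minimal sketch of that idea; the forecasts and outcomes below are invented for illustration, not Tetlock's data.

```python
def calibration_table(forecasts, outcomes, n_bins=5):
    """Bin (probability, outcome) pairs into equal-width probability bins
    and return (mean stated probability, observed frequency, count) per
    non-empty bin. A well-calibrated forecaster's mean stated probability
    tracks the observed frequency in each bin."""
    bins = [[] for _ in range(n_bins)]
    for p, hit in zip(forecasts, outcomes):
        idx = min(int(p * n_bins), n_bins - 1)  # keep p == 1.0 in the last bin
        bins[idx].append((p, hit))
    table = []
    for cell in bins:
        if cell:
            mean_p = sum(p for p, _ in cell) / len(cell)
            freq = sum(hit for _, hit in cell) / len(cell)
            table.append((mean_p, freq, len(cell)))
    return table

# Made-up forecasts (stated probabilities) and outcomes (1 = event happened).
forecasts = [0.1, 0.1, 0.3, 0.3, 0.7, 0.7, 0.9, 0.9]
outcomes  = [0,   0,   0,   1,   1,   1,   1,   1]
for mean_p, freq, n in calibration_table(forecasts, outcomes):
    print(f"stated ~{mean_p:.2f} -> observed {freq:.2f} over {n} forecasts")
```

On real data the interesting output is the gap between the two columns: hedgehog extremists making long-term predictions would show stated probabilities far from observed frequencies.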
When unexpected outcomes occurred, hedgehogs were less likely than foxes to revise their beliefs in light of new realities. They were also more likely to display hindsight bias, believing that they “knew it all along,” even when they did not, and they were less charitable toward their competition, exaggerating the extent to which rivals were mistaken. The only advantage hedgehogs enjoyed–other than greater media exposure–was a tendency to swing for the home-run fences. They were almost twice as likely as foxes to declare certain events as either inevitable or impossible, and when they did so they were usually correct.
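The belief revision being measured is just Bayes' rule applied once the evidence arrives. A toy sketch, with numbers invented purely for illustration:

```python
def bayes_update(prior, p_evidence_given_h, p_evidence_given_not_h):
    """Posterior probability of hypothesis H after observing evidence E,
    via Bayes' rule: P(H|E) = P(E|H)P(H) / [P(E|H)P(H) + P(E|~H)P(~H)]."""
    numerator = prior * p_evidence_given_h
    denominator = numerator + (1 - prior) * p_evidence_given_not_h
    return numerator / denominator

# Hypothetical example: a forecaster puts 0.8 on "no war", then observes
# evidence four times likelier under "war" (0.8) than under "no war" (0.2).
# An honest Bayesian drops the no-war estimate to 0.5; the hedgehog
# pattern Tetlock describes is leaving it stuck near the original 0.8.
posterior = bayes_update(0.8, 0.2, 0.8)
print(round(posterior, 2))  # → 0.5
```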
The implications of all this are enormous: for judging, for guidelines and evidence-based practice, and for the certainty with which practitioners assert their wisdom in deciding which sentences are appropriate. Tetlock has unknowingly issued a call for data-driven sentencing and for constant, meaningful transparency in judicial and prosecutorial reasoning. I haven’t read a book with more relevance for corrections sentencing in many years. Just think how it might have been if he’d meant to write it. Good on The Situationist for bringing it back to our attention.