Thursday, December 07, 2006

Community Corrections and “Minority Report”

Do you remember the 2002 Tom Cruise movie, Minority Report? You remember the plot -- in the future, criminals are caught before they commit their crimes, storm-trooper style. Civil liberties? What stinkin’ civil liberties! The rap on risk assessment is that it often has this sort of science-fiction quality, and that it is creepy.

However, the science of risk assessment stretches back to at least the 1950s, and it is a respectable field that has made great strides in recent years. Advocates (I am one, with qualifications, of course) emphasize that risk is being assessed every day in informal and often prejudicial ways, and that objective risk assessment has a far better track record than subjective assessment. Most respectable community corrections and prison systems assess risk with some formal instrument these days, making decisions about individual placements based on profiles and actuarial calculations.

My topic today is community corrections and risk assessment. Almost all public safety policy makers are concerned about recidivism, but they are particularly concerned about violent recidivism. Everyone understands that addicts have a hard time kicking a drug, alcohol, or similar habit. And many recidivists are a public nuisance: they harm themselves, contaminate neighborhoods, and scare young children (and many adults) with aberrant behavior. These are the folks we are mad at, and we are justified in being frustrated with them, in my view. Heck, we are mad at family members and friends with many of the same behaviors and problems, though maybe not as extreme.

But what most policy makers are really scared of is the violent recidivist -- you know, the type who is released from a halfway house only to commit a murder the same day. I don’t know how many times I have been asked, “But are they dangerous?” It is a legitimate question, and it indicates that people are making the distinction between people we are “mad at” and those we are “scared of.”

The current issue of Criminology and Public Policy (November 2006) has an article entitled “Violence Risk Screening in Community Corrections” by Davies and Dedel. They survey the instruments available (LSI-R, PCL-R-2, VRAG, HCR-20; mostly state-of-the-art as far as I know) and conclude that generic instruments will not do. I made some reference to these instruments way back in August 2006 in a post on sex offenders. (As an aside, Mike has sure written a lot since then, hasn’t he? I’m impressed at his feat, if not my own.) Portland, Oregon (and the authors) developed their own instrument, normed on their own population, and they are right to explain that this is a big deal. Taking an instrument normed on Wisconsin probationers and pretending it fits a major East Coast metropolitan area is problematic.

The response by Kathleen Auerhahn is also well worth a read. She makes several lucid points: (1) danger is a complicated concept; (2) the risk of over-prediction is high (contributing to prison growth), and in my limited experience you have to assume it with these models; (3) the line between hindsight and foresight (see Tom Cruise above) is slippery -- statistical models are based on the past and suffer from shrinkage, because the population to which the model is applied is never a perfect fit to the population it was built on. These are very reasonable reservations, which to my mind do not invalidate Davies and Dedel’s efforts.

Now on to Minority Report II.
This article from the Philadelphia Inquirer reports on Richard Berk, an iconoclast and a smart guy if ever there was one.

“Now, using fresh data from the Philadelphia probation department, Berk and three colleagues have built an innovative model for predicting which troublemakers already in the system are most likely to kill or attempt a killing.” This is a hard thing to do. It suffers from the low base rate problem: murders are rare, and therefore there isn’t much data on which to base your predictions. (Only about one in 100 Philly probationers will commit homicide. That number seems high, doesn’t it? Can that be right?)
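
To make the base rate problem concrete, here is a quick back-of-the-envelope calculation in Python. The roughly 1-in-100 figure comes from the article; the sensitivity and specificity numbers are purely hypothetical, just to show how a rare outcome drags down the predictive value of even a fairly accurate instrument.

```python
# Back-of-the-envelope look at the low base rate problem.
# The 1% base rate is from the article; sensitivity and specificity
# are hypothetical illustration values, not figures from Berk's model.

base_rate = 0.01        # about 1 in 100 probationers (per the article)
sensitivity = 0.90      # hypothetical: share of future killers the screen flags
specificity = 0.90      # hypothetical: share of non-killers the screen clears

caseload = 10_000
true_positives = caseload * base_rate * sensitivity              # 90
false_positives = caseload * (1 - base_rate) * (1 - specificity)  # 990

ppv = true_positives / (true_positives + false_positives)
print(f"Flagged cases who are actual future killers: {ppv:.1%}")  # about 8.3%
```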

The article reports that initial research suggests the software-based system can make it 40 times more likely for caseworkers to accurately predict future lethality than they can using current practices. This is what policy makers are screaming for. Berk’s model involves plugging 30 to 40 variables into a computerized checklist, which in turn produces a score associated with future lethality.
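
The article does not publish Berk’s variables or weights, but the basic mechanics of a computerized checklist are simple enough to sketch. Everything below (the variable names, the weights, the sample case) is invented for illustration; the real model reportedly uses 30 to 40 inputs and more sophisticated statistics.

```python
# A minimal sketch of a weighted-checklist risk score.
# Variable names and weights are made up for illustration only;
# this is NOT Berk's actual model.

EXAMPLE_WEIGHTS = {
    "prior_violent_convictions": 3.0,
    "age_at_first_offense_under_18": 2.0,
    "current_offense_involved_weapon": 2.5,
    "prior_probation_violations": 1.5,
}

def risk_score(case: dict) -> float:
    """Sum the weights of whichever risk factors are present in the case."""
    return sum(w for factor, w in EXAMPLE_WEIGHTS.items() if case.get(factor))

case = {"prior_violent_convictions": True, "prior_probation_violations": True}
print(risk_score(case))  # 4.5 -- higher scores go to the top of the triage list
```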

The article notes another important point, and one that Mike has stressed: resources. “The central public policy question in all of this is a resource allocation problem. With not enough resources to go around, overloaded case workers have to cull their cases to find the ones in most urgent need of attention - the so-called true positives, as epidemiologists say.” Berk explains his thinking quite cogently: “If we have 100 probationers I can accurately find the one murderer who statistically will be in that group if I devote resources to all 100 as if they are murderers. The problem is that for that one murderer who is a 'true positive,' I have 99 false positives. We all would agree that's not a good use of resources. Now suppose I can identify the 10 at highest risk. For that one true positive I now have nine false positives," Berk said, "and that may be something we choose to live with."
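
Berk’s arithmetic can be written out as a tiny function. This simply restates his own illustration, and it assumes, as he does, that the one true future murderer lands in whichever group gets intensive supervision.

```python
# Berk's resource-allocation illustration, written out.
# Assumes the one true future murderer is in whichever group
# receives intensive supervision (as in his example).

def false_positives_per_true_positive(supervised: int, true_positives: int = 1) -> float:
    """Non-murderers intensively supervised for each murderer actually reached."""
    return (supervised - true_positives) / true_positives

print(false_positives_per_true_positive(100))  # 99.0 -- supervise the whole caseload
print(false_positives_per_true_positive(10))   # 9.0  -- supervise only the 10 highest-risk
```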

This is chock full of important moral and ethical questions, as well as practical ones. In a democracy, we are called upon to weigh these issues. We need to educate ourselves in preparation for these risk-based decisions, which are going to confront us more frequently in the future.

There, Mike, is a technocorrections issue for you - sort of!
