Ben Barlyn passed along this story from yesterday's NY Times Mag, saying its topic, the growing role of neuroscience in law and particularly in criminal justice, seemed up my alley. He was right. It's a long piece, but more than worth your time if you want a good overview of where we're heading right now with this slice of technocorrections. Here are some good excerpts, but, believe me, these barely scratch the surface of this important article:
But Martell told me that it’s in death-penalty litigation that neuroscience evidence is having its most revolutionary effect. “Some sort of organic brain defense has become de rigueur in any sort of capital defense,” he said. Lawyers routinely order scans of convicted defendants’ brains and argue that a neurological impairment prevented them from controlling themselves. The prosecution counters that the evidence shouldn’t be admitted, but under the relaxed standards for mitigating evidence during capital sentencing, it usually is. Indeed, a Florida court has held that the failure to admit neuroscience evidence during capital sentencing is grounds for a reversal. Martell remains skeptical about the worth of the brain scans, but he observes that they’ve “revolutionized the law.” . . .
Proponents of neurolaw say that neuroscientific evidence will have a large impact not only on questions of guilt and punishment but also on the detection of lies and hidden bias, and on the prediction of future criminal behavior. At the same time, skeptics fear that the use of brain-scanning technology as a kind of super mind-reading device will threaten our privacy and mental freedom, leading some to call for the legal system to respond with a new concept of “cognitive liberty.” . . .
In the meantime, Jones is turning Vanderbilt into a kind of Los Alamos for neurolaw. The university has just opened a $27 million neuroimaging center and has poached leading neuroscientists from around the world; soon, Jones hopes to enroll students in the nation’s first program in law and neuroscience. “It’s breathlessly exciting,” he says. “This is the new frontier in law and science — we’re peering into the black box to see how the brain is actually working, that hidden place in the dark quiet, where we have our private thoughts and private reactions — and the law will inevitably have to decide how to deal with this new technology.” . . .
But if the prefrontal cortex does turn out to be critical for selecting among punishments, Jones added, it could be highly relevant for lawyers selecting a jury. For example, he suggested, lawyers might even select jurors for different cases based on their different brain-activity patterns. In a complex insider-trading case, for example, perhaps the defense would “like to have a juror making decisions on maximum deliberation and minimum emotion”; in a government entrapment case, emotional reactions might be more appropriate. . . .
“This is a potentially very serious legal implication,” Jones broke in, since the technology allows us to tell what people are thinking about even if they deny it. He pointed to a series of practical applications. Because subconscious memories of faces and places may be more reliable than conscious memories, witness lineups could be transformed. A child who claimed to have been victimized by a stranger, moreover, could be shown pictures of the faces of suspects to see which one lighted up the face-recognition area in ways suggesting familiarity. . . .
Jones and Marois talked excitedly about the implications of their experiments for the legal system. If they discovered a significant gap between people’s hard-wired sense of how severely certain crimes should be punished and the actual punishments assigned by law, federal sentencing guidelines might be revised, on the principle that the law shouldn’t diverge too far from deeply shared beliefs. Experiments might help to develop a deeper understanding of the criminal brain, or of the typical brain predisposed to criminal activity. . . .
“To a neuroscientist, you are your brain; nothing causes your behavior other than the operations of your brain,” Greene says. “If that’s right, it radically changes the way we think about the law. The official line in the law is all that matters is whether you’re rational, but you can have someone who is totally rational but whose strings are being pulled by something beyond his control.” In other words, even someone who has the illusion of making a free and rational choice between soup and salad may be deluding himself, since the choice of salad over soup is ultimately predestined by forces hard-wired in his brain. Greene insists that this insight means that the criminal-justice system should abandon the idea of retribution — the idea that bad people should be punished because they have freely chosen to act immorally — which has been the focus of American criminal law since the 1970s, when rehabilitation went out of fashion. Instead, Greene says, the law should focus on deterring future harms. In some cases, he supposes, this might mean lighter punishments. “If it’s really true that we don’t get any prevention bang from our punishment buck when we punish that person, then it’s not worth punishing that person,” he says. (On the other hand, Carter Snead, the Notre Dame scholar, maintains that capital defendants who are not considered fully blameworthy under current rules could be executed more readily under a system that focused on preventing future harms.) . . .
Still, Morse concedes that there are circumstances under which new discoveries from neuroscience could challenge the legal system at its core. “Suppose neuroscience could reveal that reason actually plays no role in determining human behavior,” he suggests tantalizingly. “Suppose I could show you that your intentions and your reasons for your actions are post hoc rationalizations that somehow your brain generates to explain to you what your brain has already done” without your conscious participation. If neuroscience could reveal us to be automatons in this respect, Morse is prepared to agree with Greene and Cohen that criminal law would have to abandon its current ideas about responsibility and seek other ways of protecting society. . . .
Judy Illes, director of Neuroethics at the Stanford Center for Biomedical Ethics, says, “I would predict that within five years, we will have technology that is sufficiently reliable at getting at the binary question of whether someone is lying that it may be utilized in certain legal settings.” . . .
Even if witnesses don’t have their brains scanned, neuroscience may lead judges and jurors to conclude that certain kinds of memories are more reliable than others because of the area of the brain in which they are processed. Further into the future, and closer to science fiction, lies the possibility of memory downloading. “One could even, just barely, imagine a technology that might be able to ‘read out’ the witness’s memories, intercepted as neuronal firings, and translate it directly into voice, text or the equivalent of a movie,” Hank Greely writes. . . .
In the future, neuroscience could also revolutionize the way jurors are selected. Steven Laken, the president of Cephos, says that jury consultants might seek to put prospective jurors in f.M.R.I.’s. “You could give videotapes of the lawyers and witnesses to people when they’re in the magnet and see what parts of their brains light up,” he says. A situation like this would raise vexing questions about jurors’ prejudices — and what makes for a fair trial. Recent experiments have suggested that people who believe themselves to be free of bias may harbor plenty of it all the same. . . .
Neuroscience, it seems, points two ways: it can absolve individuals of responsibility for acts they’ve committed, but it can also place individuals in jeopardy for acts they haven’t committed — but might someday. “This opens up a Pandora’s box in civilized society that I’m willing to fight against,” says Helen S. Mayberg, a professor of psychiatry, behavioral sciences and neurology at Emory University School of Medicine, who has testified against the admission of neuroscience evidence in criminal trials. “If you believe at the time of trial that the picture informs us about what they were like at the time of the crime, then the picture moves forward. You need to be prepared for: ‘This spot is a sign of future dangerousness,’ when someone is up for parole. They have a scan, the spot is there, so they don’t get out. It’s carved in your brain.” . . .
The idea of holding people accountable for their predispositions rather than their actions poses a challenge to one of the central principles of Anglo-American jurisprudence: namely, that people are responsible for their behavior, not their proclivities — for what they do, not what they think. “We’re going to have to make a decision about the skull as a privacy domain,” Wolpe says. Indeed, Wolpe serves on the board of an organization called the Center for Cognitive Liberty and Ethics, a group of neuroscientists, legal scholars and privacy advocates “dedicated to protecting and advancing freedom of thought in the modern world of accelerating neurotechnologies.” . . .
Some neuroscientists believe that T.M.S. may be used in the future to enforce a vision of therapeutic justice, based on the idea that defective brains can be cured. “Maybe somewhere down the line, a badly damaged brain would be viewed as something that can heal, like a broken leg that needs to be repaired,” the neurobiologist Robert Sapolsky says, although he acknowledges that defining what counts as a normal brain is politically and scientifically fraught. Indeed, efforts to identify normal and abnormal brains have been responsible for some of the darkest movements in the history of science and technology, from phrenology to eugenics. “How far are we willing to go to use neurotechnology to change people’s brains we consider disordered?” Wolpe asks. “We might find a part of the brain that seems to be malfunctioning, like a discrete part of the brain operative in violent or sexually predatory behavior, and then turn off or inhibit that behavior using transcranial magnetic stimulation.” Even behaviors in the normal range might be fine-tuned by T.M.S.: jurors, for example, could be made more emotional or more deliberative with magnetic interventions. Mark George, an adviser to the Cephos company and also director of the Medical University of South Carolina Center for Advanced Imaging Research, has submitted a patent application for a T.M.S. procedure that supposedly suppresses the area of the brain involved in lying and makes a person less capable of not telling the truth.
As the new technologies proliferate, even the neurolaw experts themselves have only begun to think about the questions that lie ahead. Can the police get a search warrant for someone’s brain? Should the Fourth Amendment protect our minds in the same way that it protects our houses? Can courts order tests of suspects’ memories to determine whether they are gang members or police informers, or would this violate the Fifth Amendment’s ban on compulsory self-incrimination? Would punishing people for their thoughts rather than for their actions violate the Eighth Amendment’s ban on cruel and unusual punishment? However astonishing our machines may become, they cannot tell us how to answer these perplexing questions. . . .
Train's pulling out of the station, ladies and gentlemen. Are any corrections sentencing policy types on board??