After countless scandals in recent years, the problems with America's forensics system are finally getting some national attention. In December, Sen. Patrick Leahy (D-Vt.) introduced a bill to reform the country's crime labs. In January, ProPublica and Frontline teamed up for a year-long investigation into the ways criminal autopsies are conducted across the country. In North Carolina, the state legislature is considering reforms to that state's crime lab, which was rocked by a damning 2010 investigation commissioned by the state attorney general and a follow-up report by the Raleigh News and Observer that uncovered widespread corruption, hiding of exculpatory findings, and a pro-prosecution bias among crime lab workers. All of this comes on the heels of a congressionally commissioned 2009 report from the National Academy of Sciences that found expert witnesses in many areas of forensics routinely give testimony that is not backed by good science.
So the good news is that we are starting to see some skepticism, even some outrage, about the way forensic science is used in criminal cases. The bad news is that the solutions politicians and policy makers are proposing, while better than nothing, do not really address the primary problem. That problem is perverse incentives.
To be sure, there are other problems with the forensics system. For starters, many forensic disciplines, such as hair and carpet-fiber analysis, blood spatter analysis, and especially bite mark analysis, have not been subject to rigorous scientific testing. Even fingerprint analysis is not the sure thing it was once thought to be. Many of these fields were either invented by law enforcement agencies or honed and refined by them. The fields have not been subjected to peer review, and the methods by which, for example, a carpet-fiber or ballistics analyst produces a "match" are not blind. On the contrary, the analyst often knows the details of the crime and which sample implicates the suspect. When done this way, these analyses are not science, but they are often presented in court as if they were.
Some of the policies now under consideration at the state and federal levels could help with these problems. Leahy's bill would require any crime lab that receives federal funding to be accredited and to make sure all of its analysts are certified. (It isn't clear who would do the accrediting and certifying.) The bill would also provide funding for scientific research into the various forensic fields to establish best practices and standards and to ascertain the scientific validity and accuracy of those fields. The North Carolina legislature is considering a bill that would create an advisory panel to oversee the state crime lab. The bill also would make it a felony for a crime lab worker to willfully withhold exculpatory information. The fact that such misconduct is not already considered a crime speaks volumes.
These laws would help ensure that only forensics backed by science gets into the courtroom, and they would at least cut down on blatant corruption in crime labs. But the main problem driving nearly all the recent forensics scandals is a built-in bias in favor of winning convictions. In too many jurisdictions, medical examiners report to the attorney general or to the state official who oversees law enforcement. In states like Mississippi, where for most of the last 25 years prosecutors contracted criminal autopsies out to private doctors, the incentive for medical examiners was to produce results beneficial to the prosecutor's case. If they brought back results the prosecutor did not like, they risked losing future referrals. As I've reported during the last several years, that system produced the travesty of justice that was Steven Hayne, a physician who testified in thousands of cases despite serious questions about his qualifications, credibility, and practices. But if it hadn't been Hayne, it would have been someone else.
Although states where medical examiners work directly for the state are better, incentive problems still exist. There is always pressure, blatant or implied, to deliver results the state needs to win a prosecution. That does not mean all or most or even a significant percentage of medical examiners are corrupt. But having a medical examiner and his staff ultimately report to the head of a law enforcement agency introduces subtle pressures that can influence even the most conscientious doctors.
The pressures can be even greater for serologists, ballistics experts, fingerprint analysts, and other nonmedical forensic experts, many of whom are actually sworn law enforcement officers. In a 2008 paper published by the Reason Foundation (which publishes Reason magazine and Reason.com), Roger Koppl, director of the Institute for Forensic Science Administration at Fairleigh Dickinson University, explains the myriad ways in which unintentional bias can creep into an analyst's work.
In an article that appeared in the January 2002 California Law Review, for example, a research team led by Seton Hall law professor Michael Risinger identified five stages of scientific analysis that can be corrupted by unintentional bias. They include how the analyst observes the initial data, how he records the data, how he makes calculations, and how he remembers and reinterprets his notes when preparing for trial. Koppl also cites a 2006 British study by researchers at the University of Southampton who found that the error rate of fingerprint analysts doubled when they were told the details of the case they were analyzing.
Establishing blue-ribbon commissions, best practices and standards, and various oversight boards won't do much to combat cognitive bias. It is not even clear these steps will prevent outright corruption. In Mississippi, professional groups such as the National Association of Medical Examiners (NAME) received numerous complaints about Steven Hayne going back at least to the early 1990s. They did not act until 2009, despite the fact that Hayne routinely, flagrantly, and admittedly violated NAME's guidelines. The North Carolina crime lab was accredited by the American Society of Crime Laboratory Directors' Laboratory Accreditation Board, which failed to notice a litany of repeated violations.
The best way to begin mending the problems with the forensics system is to fix the incentives, aligning them so analysts are rewarded only for sound, scientifically supported work and punished for allowing their work to be influenced by bias, intentional or not. Koppl makes several specific recommendations in his paper for the Reason Foundation, which he and I summarized in a 2008 Slate article. The most important changes are taking state crime labs and medical examiner offices out from under the control of state law enforcement agencies and introducing a system of "rivalrous redundancy" for forensic analysis. To its credit, the Mississippi legislature is considering a bill that would have the state medical examiner report to an independent board of supervisors. Unfortunately, while the North Carolina bill changes the name of the state crime lab, it still puts the lab under the control of the State Bureau of Investigation, a police agency.
Rivalrous redundancy is in some ways a more drastic reform, but it also makes a lot of sense. The idea is to send every third, fourth, or fifth piece of testable evidence in criminal cases to a private lab in addition to the state lab. With medical examiners, every third or fourth autopsy would be reviewed by a private forensic pathologist. This system would also create more work for certified forensic pathologists; part of the current problem is that there are few independent forensic pathologists because most forensic autopsies are done by government officials, which keeps salaries low and positions scarce.
Under a system of rivalrous redundancy, state workers would not know which of their tests were being reviewed by analysts in private practice. Koppl suggests creating an independent evidence-handling office to coordinate the redundancy tests. Ideally, the tests would be rotated among several private labs. This system would eliminate the perverse incentives that plague state forensics labs. A private lab's incentive would be to discover mistakes made by the state lab, since uncovering those mistakes would enhance the private lab's reputation and prestige. State lab workers, for their part, would be free to concern themselves only with sound analysis. The incentive to please police or prosecutors would be overwhelmed by the knowledge that an independent lab would be reviewing their work; their main incentive would be to avoid embarrassing mistakes.
All of this would of course cost money, making the idea a tougher sell in the current fiscal environment than it was when Koppl first suggested it several years ago. But as Koppl points out in his paper, wrongful convictions are also enormously expensive. Taxpayers foot the bill for the initial erroneous investigation, trial, and conviction; for defending that conviction on appeal and in post-conviction proceedings (in most cases, they also pay for the wrongly convicted person's defense); for compensating the wrongly convicted defendant; and then for a second investigation and, if the real culprit is caught, a second trial and round of appeals. Koppl estimates that the cost of just a couple of wrongful convictions would more than pay for the implementation of his proposals.
But the cost of getting the incentives right really should not be an issue. The government's primary responsibility is to protect our rights and safety. Police, prosecutors, courts, and jails are all legitimate functions stemming from that responsibility. But so is ensuring that the people the government puts in prison are actually guilty of the crimes for which they are being punished.