Do we pay too much to avoid minuscule risks? Yes, according to a new study by Office of Information and Regulatory Affairs (OIRA) administrator Cass Sunstein and Harvard University economist Richard Zeckhauser. The study, “Overreaction to Fearsome Risks,” published recently in the journal Environmental and Resource Economics, finds that “in the face of a low-probability fearsome risk, people often exaggerate the benefits of preventive, risk-reducing, or ameliorative measures.” Consequently, the researchers find that “in both personal life and politics, the result is damaging overreactions to risks.”
Translation: Scared people who don’t understand or care about parsing probabilities end up spending far more than is rational to avoid truly tiny risks. Worse yet, policy makers are often stampeded by frightened constituents into enacting regulations that cost far more than the benefits they offer in risk reduction.
Sunstein and Zeckhauser note, “Overreaction to risk is frequently found in the environmental realm.” As an example of overreaction they point to the Three Mile Island (TMI) nuclear meltdown in 1979. The Kemeny Commission report concluded that “the radiation doses received by the general population as a result of exposure to the radioactivity released during the accident were so small that there will be no detectable additional cases of cancer, developmental abnormalities, or genetic ill-health as a consequence of the accident at TMI.” Sunstein and Zeckhauser suggest that the country overreacted after the meltdown when construction of new nuclear plants stopped in the United States for 30 years. One consequence of the de facto nuclear power moratorium is that the coal-fired plants built instead caused many more health problems than new nuclear plants likely would have.
To illustrate how bad people are at understanding minuscule risks, the two researchers conducted an experiment with Harvard and University of Chicago law students who were asked what they would be willing to pay to avoid a one-in-a-million cancer risk. They could check off $0, $25, $50, $100, $200, $400, and $800 or more. One set of students was merely asked the question, while another was first given a highly emotional description of how gruesome cancer can be and then asked. The unemotional group averaged about $60 to avoid a one-in-a-million risk of cancer, while the emotional group averaged $210, three and a half times as much.
Sunstein and Zeckhauser find “many people will focus, much of the time, on the emotionally perceived severity of the outcome, rather than on its likelihood.” They add, “With respect to risks of injury or harm, vivid images and concrete pictures of disaster can ‘crowd out’ the cognitive activity required to conclude and consider the fact that the probability of disaster is really small.” Activating the emotional centers of the amygdala can suppress the executive functions of the prefrontal cortex.
Taking advantage of this flaw in reasoning, the researchers observe, “In this light, it should not be surprising that our public figures and our cause advocates often describe tragic outcomes. Rarely do we hear them quote probabilities.” In other words, politicians and activists deploy sob stories to scare the public into demanding regulations on activities they dislike. Indeed, as Sunstein and Zeckhauser explain, policymakers and activists have a bias toward action in such situations when they think they can obtain credit for responding to the risk. They want to seem like heroes to the public.
“If we look across dozens of cases, we can observe a pattern in which salient but extremely low probability risks are sometimes met with excessive responses,” write Sunstein and Zeckhauser. While not reaching any conclusions about what the government should have done or not done, the two do note that this overreaction dynamic has played out in many recent regulatory episodes, including the Love Canal contamination and evacuation incident (which led to the federal Superfund waste site cleanup program), the ripening agent Alar (which was banned), shark attacks (Florida passed legislation prohibiting feeding them), the anthrax letters (screening mail for anthrax), and terrorism (the Iraq war).
So how much should someone pay to avoid a one-in-a-million risk? A “micromort” is defined as a one-in-a-million risk of dying. Let’s look at cancer. First, keep in mind that Americans have a high probability of contracting cancer. For example, an American man has a 44 percent lifetime risk of developing cancer and a 23 percent risk of dying of it. An American woman’s lifetime risk of contracting cancer is 38 percent and her risk of dying of it is 20 percent.
According to Carnegie Mellon University’s Death Risk Rankings, Americans’ average risk of dying in the next year is 8,931 micromorts. That is, out of 1,000,000 Americans alive today, 991,069 will be alive next year. With regard to cancer, Americans face an annual risk of 2,075 micromorts, which means that out of 1,000,000, some 997,925 will not have died of cancer in the next year. So what does a reduction of a one-in-a-million risk of cancer amount to? Instead of 997,925 people not dying of cancer in the next year, 997,926 will not have died of cancer.
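The arithmetic behind these figures is simple enough to check. A minimal sketch (the function name is my own, not from the study):

```python
# A "micromort" is a one-in-a-million chance of dying, so a risk stated
# in micromorts maps directly onto deaths per 1,000,000 people.

def survivors_per_million(micromorts: int) -> int:
    """Out of 1,000,000 people, how many do NOT die at a given risk level."""
    return 1_000_000 - micromorts

all_causes = survivors_per_million(8_931)  # annual all-cause risk
cancer = survivors_per_million(2_075)      # annual cancer risk

print(all_causes)  # 991069 alive next year
print(cancer)      # 997925 not dying of cancer next year

# Reducing the cancer risk by one micromort spares exactly one person
# per million:
print(survivors_per_million(2_074) - cancer)  # 1
```

Seen this way, a one-in-a-million risk reduction means one additional survivor out of a million people, against a background of thousands of deaths from the same cause.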
Another way to think about how much one might want to pay for avoiding a one-in-a-million risk is to make a rough calculation based on the Environmental Protection Agency’s new $9.1 million valuation of a statistical life. Since the agency uses a regulatory standard of a one-in-a-million risk, this implies that someone would be willing to spend about $9 to avoid such a risk. Now compare this amount to the 2000 study by Kip Viscusi and James Hamilton, Calculating Risks: The Spatial and Political Dimensions of Hazardous Waste Policy, which found that the average cost per cancer case avoided at most EPA Superfund sites was more than $100 million. And when you look at the median cost per cancer case avoided in the Viscusi and Hamilton study, Superfund looks even worse—that cost was $388 million. Assuming that Viscusi and Hamilton are right, $100 million is clearly an overreaction when even law students who were spooked by a gruesome description of cancer were willing to pay only $210 to avoid a one-in-a-million risk of cancer.
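A back-of-the-envelope sketch of these comparisons, using only the dollar figures quoted above (the function names are my own, for illustration):

```python
# Expected-value reasoning: a rational willingness to pay to avoid a risk
# of death is (value of a statistical life) x (probability of death).

MICROMORT = 1e-6  # a one-in-a-million risk

def rational_wtp(value_of_statistical_life: float, risk: float = MICROMORT) -> float:
    """Expected-value willingness to pay to avoid a given risk of death."""
    return value_of_statistical_life * risk

def implied_vsl(willingness_to_pay: float, risk: float = MICROMORT) -> float:
    """Value of a statistical life implied by a stated willingness to pay."""
    return willingness_to_pay / risk

print(rational_wtp(9.1e6))  # EPA's $9.1M VSL implies roughly $9.10 per micromort
print(implied_vsl(210))     # the spooked students' $210 implies a $210M life
print(implied_vsl(60))      # even the calm group's $60 implies a $60M life
print(100e6 / 9.1e6)        # Superfund at $100M per case is ~11x the EPA's own VSL
```

By this yardstick, even the unemotional students overpaid relative to the EPA’s valuation, and Superfund’s average cost per cancer case avoided overshoots it by an order of magnitude.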
Ultimately, Sunstein and Zeckhauser suggest that institutional safeguards are the best way to insure against the harmful consequences of public overreaction. They maintain that requiring benefit-cost analysis combined with careful attention to the relevant probabilities “should provide a check on regulations that deviate substantially from objective evidence.” And who will wield benefit-cost analysis as a weapon against public overreaction? Wise bureaucrats, of course. The Office of Information and Regulatory Affairs, they note, “monitors agency action to ensure that it is directed against genuinely significant problems.” And who is in charge of OIRA? None other than the wisest of bureaucrats, Cass Sunstein.
Perhaps Sunstein will be able to prevent future regulatory overreaction, but the history detailed in this study does not inspire much confidence that he will succeed.
“If people show unusually strong reactions to low-probability catastrophes, a democratic government is likely to act accordingly,” note Sunstein and Zeckhauser. Indeed. And the tendency to overreact is exacerbated when those demanding action are not the ones paying directly for it. In addition, politicians have little incentive to quell public fears. Satirist H.L. Mencken memorably summarized this democratic dynamic: “The whole aim of practical politics is to keep the populace alarmed (and hence clamorous to be led to safety) by menacing it with an endless series of hobgoblins, all of them imaginary.” Maybe a one-in-a-million risk is not imaginary, but it's pretty damned close to it.
Science Correspondent Ronald Bailey is author of Liberation Biology: The Scientific and Moral Case for the Biotech Revolution (Prometheus Books). This column first appeared at Reason.com.