High school economics wunderkind Evan Soltas put an interesting chart (see below) on his blog yesterday that The Atlantic's Matt O'Brien pointed out to me, asking for a libertarian response. I lightly and respectfully undertake an attempt below.
Evan writes that he pulled the chart together from FRED and NBER data showing that recessions have become considerably less frequent since the 1850s. This is not incredibly new information—it is widely understood that volatility since the 1930s has been much more subdued than it was in the 19th century—though I had not seen the information presented like this before, to Evan's credit.
(The chart was an embedded interactive file so I had to take a screengrab to show it here, but check out the original on Evan's blog here.)
By itself this chart doesn't tell you much until you start layering other pieces of information onto it to infer the causes. For instance, Evan argues:
In libertarian circles, the late 19th-century is seen as the pinnacle of growth and of laissez-faire and treated with according reverence. That story is not really true. Statistics which show unprecedented growth during the Gilded Age, I worry, are either imprecise, inaccurate, or worse, gamed according to their start- and end-points... It would be very possible that [GDP] grew quickly in between the frequent recessions, but the data do not support such a case: from 1800 to 1840, real GDP per capita grew at 0.4 percent annually; from 1840 to 1880, 1.44; from 1880 to 1920, 1.78; from 1920 to 1960, 1.68; from 1960 to 1978, 2.47.
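To get a feel for what those annual rates imply cumulatively, here is a quick back-of-the-envelope sketch (my own illustration, not part of Evan's analysis) compounding each quoted per-capita growth rate over its period:

```python
# Compound the annual real GDP per capita growth rates quoted above
# over each period. The rates come from Evan's quote; the compounding
# is just my own rough arithmetic.

periods = [
    ("1800-1840", 0.0040, 40),
    ("1840-1880", 0.0144, 40),
    ("1880-1920", 0.0178, 40),
    ("1920-1960", 0.0168, 40),
    ("1960-1978", 0.0247, 18),
]

for label, rate, years in periods:
    multiple = (1 + rate) ** years  # cumulative growth multiple
    print(f"{label}: {rate:.2%}/yr -> {multiple:.2f}x over {years} years")
```

Compounded this way, the 1880–1920 span roughly doubles per-capita output, while 1800–1840 raises it by less than a fifth, which is the contrast Evan is pointing at.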
I don't want to put words in Evan's mouth, but the underlying assumption appears to be that it was the creation of the Federal Reserve, the victory of new Keynesian economic policy, Glass-Steagall, deposit insurance, and a less laissez-faire system that enabled the faster growth of the 20th century. Furthermore, I take the second assumption to be that our present state of fewer recessions and average GDP growth of 2 percent over a multi-decade period is preferable. (I'm happy to be corrected if I am in error on these judgments.)
While I have not dug into this specific data myself for any extended period of time (and it appears there was a detailed attempt here anyway), there are a few things to consider in performing such economic analysis. To start, recessions are not uniform events. They are not created equal. Their causes matter more than their number. For example, we might prefer five recessions of six to eight months each, scattered between 2002 and 2012 and all caused by overinvestment in tech firms like we saw in the wake of the dot-com bubble's burst, to the boom from 2002 to 2007 followed by the 19-month recession and then three-plus years of tepid economic growth.
In the former scenario we are less likely to see recessions substantially impact household debt or long-term consumption trends. Spending would tighten up for a few months, balance sheets would be cleansed a bit, but the level of toxicity would not be so dramatic as to cause the losses we've experienced in the wake of our most recent bubble's bursting.
In the latter scenario we have only 19 months of recession to deal with, as opposed to as many as 40 months of recession to wrestle with. And we achieve much higher living standards for at least half of the time period. However, there is no inherent, objective measure that suggests this is better than the alternative scenario that I set up.
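The month counts in that comparison are simple arithmetic on the hypothetical figures above (my own illustration, using the assumed numbers from the text, not actual business cycle data):

```python
# Scenario A: five short recessions, each six to eight months long
scenario_a_min = 5 * 6   # 30 months in recession at the low end
scenario_a_max = 5 * 8   # 40 months in recession at the high end

# Scenario B: one long recession, as in 2007-2009
scenario_b = 19          # 19 months in recession

print(f"Scenario A: {scenario_a_min}-{scenario_a_max} months in recession")
print(f"Scenario B: {scenario_b} months in recession")
```

By raw months alone Scenario B looks preferable, which is exactly why the causes, not the counts, have to carry the argument.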
Nor is my alternative objectively better. If the causes of those frequent recessions were bank runs that tightened liquidity and caused widespread bankruptcies as businesses failed to get access to credit to finance their payrolls, then we might not see quick bounce-backs, and the effects of those regular recessions could bleed into one another, creating the environment Evan's data suggests for the middle part of the 19th century (which did include a devastating Civil War, by the way).
All of this merely points out that the frequency of recessions is a relatively unimportant data point. What matters is the source of those recessions.
So to the second assumption, on the causes of the decreased volatility. This is a complex question. Earlier today on this blog I pointed out the declining savings rate of the 1980s, 1990s, and 2000s and how it contributed to the economic boom years following the end of stagflation and the Reagan recession. Perhaps if there were no technical advances to give us credit cards, or if we were a less trusting society, we would have had slower economic growth. Would this change have then discredited Reaganomics or influenced the way we view the impact of tax rates on GDP growth? Probably.
The point here is merely to suggest that higher average GDP growth rates in the 20th century relative to the 19th don't tell us much about the realities of the Gilded Age. What would the 20th century have been without the technological advances that gave us cars, planes, global telecommunication, and computing power? Back in 2008, this country would have given up a lot to get a 1.5 percent GDP growth rate.
Evan concludes that his chart and argument shows "a very different picture of America, when you think about it. Frequent recessions, slow growth, little improvement in living standards, profound inequality -- all of this against what we have (had?) in the postwar era: fewer recessions, faster growth, faster improvement of living standards, less inequality."
On its face this is an efficiency argument for the central bank era versus a supposed laissez-faire era. The problem is that it assumes the desirability of a particular recession rate, speed of economic growth, and level of equality, all divorced from their causes. That's not a leap of logic we should lightly undertake.