The Authority of Quantitative Research and its Influence on Austerity Economics

Earlier this week, the findings of a new paper by a hitherto unknown graduate student created a flurry among mainstream economists. The study, published by Thomas Herndon and two of his professors, Robert Pollin and Michael Ash, essentially pointed out that the pro-austerity movement had been relying on shoddy economics.

“At first, I didn’t believe him. I thought, ‘OK he’s a student, he’s got to be wrong. These are eminent economists and he’s a graduate student,'” [UMass Amherst professor Robert] Pollin said. “So we pushed him and pushed him and pushed him, and after about a month of pushing him I said, ‘Goddamn it, he’s right.'” (Kevin Roose)

This fiasco is relevant for a number of reasons: firstly, because of its implications for the authority of quantitative research, its relationship to political power, and what happens when the methods academics use in their research are not subjected to rigorous critique. Secondly, it reminds us that policy-makers often appropriate such research for reasons ranging from careerist self-interest to political ideology. Finally, it highlights the ethical responsibility academia bears in reflecting on the body of work it produces, and the necessity of meticulous peer review.


Paul Krugman points out that the intellectual edifice of recent austerity economics rests largely on two academic papers, which were seized on by US and European policy-makers without serious scrutiny and used as justification for austerity campaigns. Austerity here is meant in a precise sense: a strategy of restoring long-term economic growth by reducing levels of public debt, eliminating budget deficits and cutting government spending. The argument made by austerians, which shall be critically examined below, is that increased spending and government indebtedness, past a certain threshold, result in negative growth or economic contraction.

One of the aforementioned studies, by Alesina and Ardagna, looked at the macroeconomic effects of austerity and gave academic backing to the notion of expansionary austerity. The current debate about fiscal stimulus, and whether it can kick-start growth, centers on what happens when interest rates are very close to zero (see, for example, Japan in the 1990s). This is in contrast to scenarios where a central bank adjusts interest rates accordingly, for example by raising them to counter the effects of a fiscal expansion. However, the paper doesn’t distinguish between episodes where monetary policy was a tool available to policy-makers and those where it wasn’t (Krugman 2010). Secondly, the authors use an awkward statistical method to locate episodes of fiscal contraction and expansion, identifying them by large changes in states’ structural balances. As a result, the one clear case of Keynesian stimulus in a zero-interest-rate environment (Japan in 1995) is entirely omitted, as is the large fiscal contraction of 1997, while the study instead focuses on spurious cases of austerity in other years. As Krugman notes elsewhere, the IMF’s 2010 World Economic Outlook largely discredits various studies dealing with the economics of fiscal austerity, pointing to their weak methodology in identifying actual changes in fiscal policy. He emphasizes that “the study shows that fiscal contractions have normally been accompanied by both lower policy interest rates and currency depreciation, both of which help cushion the negative effects. It seems clear that when you’re both in a liquidity trap and facing a global slump, the negative effects of austerity are likely to be much worse.” (ibid).

The other study is the one at the center of this week’s fiasco: a paper by Reinhart and Rogoff on the negative effects of debt on growth. The main finding of the paper is that “median growth rates for countries with public debt over 90 percent of GDP are roughly one percent lower than otherwise; average (mean) growth rates are several percent lower” (Reinhart and Rogoff 2010). Once a country’s public-debt-to-GDP ratio reaches 90%, growth rates begin to decline, and beyond that threshold there is only an abyss. More broadly, they conclude that growth slows as debt rises. In April 2013, however, a group of researchers at the University of Massachusetts Amherst released a study critiquing these findings. The thrust of their critique is that Reinhart and Rogoff omitted some crucial data, made some very odd choices about which data to exclude (for example, selectively excluding years where high debt went in tandem with average growth), and made unreliable decisions about how to weight the data that remained (Roberts 2013; Krugman 2013). They found that the research suffered from a “selective exclusion of available data, and unconventional weighting of summary statistics [which led] to serious errors that inaccurately represent the relationship between public debt and GDP growth among 20 advanced economies in the post-war period” (Herndon et al. 2013). In addition, there were basic coding errors in their Excel spreadsheets which resulted in high-debt, average-growth countries being excluded. Kevin Roose puts it thus:

“What Herndon had discovered was that by making a sloppy computing error, Reinhart and Rogoff had forgotten to include a critical piece of data about countries with high debt-to-GDP ratios that would have affected their overall calculations. They had also excluded data from Canada, New Zealand, and Australia — all countries that experienced solid growth during periods of high debt and would thus undercut their thesis that high debt forestalls growth. Herndon was stunned. As a graduate student, he’d just found serious problems in a famous economic study — the academic equivalent of a D-league basketball player dunking on LeBron James.”

When the figures are properly calculated, Herndon and his co-authors find that “the average real GDP growth rate for countries carrying a public-debt-to-GDP ratio of over 90% is actually 2.2 percent, not -0.1 percent” as concluded by R&R.
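To make the mechanics concrete, here is a minimal sketch of how an Excel-style range error can flip the sign of a computed average. The growth figures below are invented for illustration, not the actual Reinhart-Rogoff dataset:

```python
# Illustrative only: invented growth rates (%) for high-debt country-years,
# not the actual Reinhart-Rogoff data.
growth = {
    "Australia":   3.0,   # solid growth at high debt
    "Austria":     2.5,
    "Belgium":     2.1,
    "Canada":      2.2,   # solid growth at high debt
    "Greece":      1.8,
    "Ireland":     2.0,
    "Italy":       1.0,
    "Japan":       0.7,
    "New Zealand": -7.6,  # the one sharply negative observation
    "UK":          2.4,
    "US":          -2.0,
}

# Mean over the full sample.
full_mean = sum(growth.values()) / len(growth)

# An AVERAGE() formula whose range stops five rows short silently
# drops the first five countries in the sheet's (alphabetical) order.
kept = sorted(growth)[5:]
truncated_mean = sum(growth[c] for c in kept) / len(kept)

print(f"full sample:      {full_mean:+.1f}%")
print(f"truncated sample: {truncated_mean:+.1f}%")
```

With these toy numbers the full-sample mean is positive while the truncated one is negative: dropping a handful of high-debt, solid-growth rows (countries like Australia and Canada in the Roose quote above) is enough to change the headline result.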

Their data on debt-to-GDP also had some gaps, so one particular year in New Zealand, 1951, in which they recorded GDP falling by 7.6%, is given enormous weight, in effect the same as 19 years of Greece. It weighs so heavily partly because four earlier post-war years were excluded, and partly because the average was produced by first averaging the figures for each country and then averaging those country means: “A single year of bad growth in one high-debt country counts as much as multiple years of good growth in another high-debt country” (Krugman 2013). As James Mackintosh points out, this is problematic because it takes no account of the tripling of the price of wool (which made up half of New Zealand’s exports at the time) in 1949-50. According to Brian Easton’s GDP series, GDP fell 5.5% in the fiscal year 1951-52, after rising 15.5% the year before, something that R&R do not account for. Krugman suggests that what they are showing is reverse causation, presenting their study “as if debt was necessarily causing slow growth rather than the other way around” (Krugman 2013): countries with high debt-to-GDP ratios could have ended up there because of their serious economic problems. Using his own data on the G7 countries, he agrees that there appears to be a correlation between high debt and slow growth (see Figure 1), yet even the UK, which surpassed a debt-to-GDP ratio of 200% in the 1950s, didn’t suffer much in terms of growth. The exceptions are Italy and Japan, which “ran up high debts as a consequence of their growth slowdowns, not the other way around” (ibid).
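The weighting scheme can be sketched in a few lines. The numbers below are again invented (a single -7.6% contraction for New Zealand, a flat 2.9% for nineteen Greek years), but they show how averaging country means, rather than pooling all country-years, lets one bad year dominate:

```python
# Illustrative growth rates (%) for high-debt years; invented numbers,
# not the actual dataset.
high_debt_years = {
    "New Zealand": [-7.6],   # a single year of sharp contraction
    "Greece": [2.9] * 19,    # nineteen years of moderate growth
}

# R&R-style weighting: average within each country first, then average
# the country means, so NZ's one year counts as much as Greece's nineteen.
country_means = [sum(v) / len(v) for v in high_debt_years.values()]
rr_style = sum(country_means) / len(country_means)

# Pooled alternative: every country-year observation weighs equally.
all_years = [g for v in high_debt_years.values() for g in v]
pooled = sum(all_years) / len(all_years)

print(f"country-mean average: {rr_style:.2f}%")
print(f"pooled average:       {pooled:.2f}%")
```

On the same data, the country-mean average comes out negative while the pooled average stays comfortably positive: the choice of weighting alone is enough to reverse the sign.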

Figure 1: Paul Krugman – Growth Rate vs. Debt Ratio

In a response in the Wall Street Journal online, Reinhart and Rogoff admit the Excel errors, but stand by their methodological choices of excluding certain years (because of gaps in official data) and their method of averaging. They fundamentally still support their broad conclusion that growth slows as debt rises, just not as dramatically as they originally stated (see James Mackintosh’s blog post “Excel, New Zealand and Reinhart & Rogoff” for more on this).

Paul Ryan, the Republican chair of the House Budget Committee, pushed for rapid fiscal tightening, citing R&R as “conclusive empirical evidence that total debt exceeding 90 per cent of the economy has a significant negative effect on economic growth” (Roberts 2013). Methodological flaws and errors of omission are a big deal here because these studies, carried out by two eminent mainstream economists from the most prestigious, if conservative, institutions, and used by politicians since the onset of the global financial crisis to justify austerity measures, “have been found to distort, mislead and make basic errors in their stats” (ibid.). This is disconcerting in itself, and more so is the fact that policy-makers appear to unscrupulously cherry-pick economic research, since exposed as academically weak, simply because it fits the dominant political ideology. Roberts concludes:

“But whatever view you take: that austerity works or does not work; or the Keynesian alternatives work or don’t work in getting capitalism back on its feet, the news that mainstream academics are fast and loose with their number crunching in order to reach pre-conceived conclusions is not so surprising.  It’s part of what Marx called ‘vulgar economics’.  It has only been revealed this time because of the battle over pro-capitalist economic policy between the Austerians and Keynesians.”

One thought on “The Authority of Quantitative Research and its Influence on Austerity Economics”

  1. Great article, thank you for giving more details than a few newspapers out there…

    I think the Reinhart-Rogoff replication scandal has not only consequences for how we think about debt. It might well change the way we think about replication and data sharing in the social sciences.

    Reproducibility issues in the social sciences had already become more prominent after the Twitter hype around #overlyhonestmethods, but now I’ve talked to a few journal editors and fellow scholars and it seems things are moving. More and more journals in the social sciences are rethinking their replication policies, and I think the social sciences can learn much from the natural sciences’ guidelines. So far, most journals do not ask authors to give proof that they uploaded their replication data somewhere. I think changing this would be the way forward.
