Smith: There’s too much red tape (but only a little)

Noah Smith

I’m very sympathetic to the idea that regulation holds back growth. It’s easy to look around and find examples of regulations that protect incumbent businesses at the expense of the consumer; for example, the laws that forbid car companies from selling directly to consumers, creating a vast industry of middlemen.

You can also find clear examples of careless bureaucratic overreach and inertia, like the total ban on sonic booms over the U.S. and its territorial waters (as opposed to noise limits). These inefficient constraints on perfectly healthy economic activity must reduce the size of our economy by some amount, acting like sand in the gears of productive activity.

The question is how much. Hard-core free-marketers often claim that the cumulative effect of regulation is very large, and that dramatic cuts in regulation could boost economic growth for many years.

The problem is that it’s very hard to find solid evidence to back up this assertion.

If regulation is less harmful than the free-marketers would have us believe, we risk concentrating our attention and effort on a red herring. But because regulations are all very different, and they act on different industries, simply getting an idea of the overall cost of regulation is a daunting task.

That may have changed with the advent of RegData, a database of regulations compiled by George Mason University’s Mercatus Center think tank. RegData uses text mining to count the number of times that federal regulations use words like “shall” or “must” with regard to a specific industry. The frequency of those words is used as a measure of how heavily the industry is regulated.
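To make that concrete, here is a minimal sketch of the kind of keyword counting involved. The word list and sample text are my own illustration; the actual RegData pipeline is considerably more sophisticated, since it also has to attribute each restriction to the industries it covers.

```python
import re
from collections import Counter

# A toy version of restriction counting (my own sketch, not RegData's
# actual code). The term list and sample text are purely illustrative.
RESTRICTION_TERMS = ["shall", "must", "may not", "prohibited", "required"]

def count_restrictions(text: str) -> Counter:
    """Count occurrences of each restrictive term in a block of regulatory text."""
    lowered = text.lower()
    return Counter({
        term: len(re.findall(r"\b" + re.escape(term) + r"\b", lowered))
        for term in RESTRICTION_TERMS
    })

sample = ("The operator shall file an annual report. Discharges are "
          "prohibited without a permit, which applicants must obtain.")
print(count_restrictions(sample))
# Counter({'shall': 1, 'must': 1, 'prohibited': 1, ...})
```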

Though that’s a very imprecise yardstick by which to gauge regulatory severity, it’s the best we’ve got, at least for now.

As a result, researchers are now using RegData to study the impact of regulation on various economic outcomes. For example, a 2014 paper by Alex Tabarrok and Nathan Goldschlag finds that the rate at which new businesses are started in a particular industry seems unrelated to how strictly that industry is regulated. That implies that regulation is not the main reason that U.S. business dynamism and entrepreneurship are declining.
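The core of that test is straightforward: line up a measure of regulatory stringency against entry rates, industry by industry, and check for a relationship. A toy version, with made-up numbers rather than the paper’s actual data, might look like this:

```python
import numpy as np

# Illustrative sketch of a Goldschlag-Tabarrok-style comparison. The
# figures below are invented; the actual paper uses RegData restriction
# counts and Census data on new firms by industry.
restrictions = np.array([120, 4500, 900, 15000, 300, 7800])    # per industry
startup_rate = np.array([0.11, 0.09, 0.12, 0.10, 0.08, 0.11])  # new-firm share

r = np.corrcoef(np.log(restrictions), startup_rate)[0, 1]
print(round(r, 2))  # near zero: no clear relationship in this toy data
```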

Goldschlag and Tabarrok’s paper has already led many, including me, to question whether a nationwide focus on finding regulations that are a needless drag on growth might be a wild goose chase.

But a new paper by Mercatus scholars Bentley Coffey, Patrick McLaughlin, and Pietro Peretto claims that regulation is a long-term killer of economic growth. Their result has focused new attention on the downsides of the regulatory state.

For example, Bloomberg View columnist Megan McArdle writes:

“An economy with but one regulation… would probably not find this much of a drag on growth. But multiply those regulations by thousands, by millions, and you start to have a problem… [Coffey et al. find] that the growth of regulation between 1977 and 2012 has shaved about 0.8 percent off the rate of growth, costing the nation a total of $4 trillion worth of GDP.”

An extra $4 trillion in gross domestic product would make us almost a third richer as a nation, which is nothing to sneeze at, especially in this era of stagnant incomes. But while I agree that we should be taking a harder look at many government rules, I have my doubts about the evidence here. I’m just not sure that Coffey et al. have a strong case for that $4 trillion number.
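The arithmetic behind “almost a third richer” is just compounding: an extra 0.8 percentage points of annual growth, sustained over the 35 years from 1977 to 2012, produces an economy roughly a third larger. A quick back-of-the-envelope version (my own illustration, not the paper’s calculation):

```python
# Back-of-the-envelope compounding: what 0.8 extra percentage points
# of annual growth amounts to over 1977-2012.
years = 2012 - 1977      # 35 years
factor = 1.008 ** years
print(round(factor, 2))  # ~1.32: an economy about a third larger
```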

First, whereas Goldschlag and Tabarrok’s paper is almost purely empirical (it simply compares regulation to startup rates directly), the paper by Coffey et al. relies on a theoretical model of economic growth. And when a paper depends on macroeconomic theory, especially growth theory, I get suspicious. Long-term growth models can’t really be verified with data, since the models turn on very long-run effects and we have only a short period of documented history to work with. To really know what makes economies grow, we’d need centuries of observation, but we have only decades. And the small amount of data we do have gives reason for skepticism that the huge long-term effects in the Coffey model really exist.

Even more concerning, the vast complexity of Coffey et al.’s approach makes statistical estimation much more difficult. Because their model of the economy is so complicated, they have to estimate a total of 1,626 parameters, and they do this using only 34 years of data. As anyone who has worked with statistics knows, estimating a large number of parameters from a small number of data points isn’t going to yield reliable numbers.
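To see why, consider a toy version of the problem (mine, not theirs): give a model roughly one free parameter per observation and it can fit the sample almost perfectly while telling you nothing reliable about anything outside the sample.

```python
import numpy as np

# Toy illustration of the overfitting worry, not the paper's model.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 10)                # 10 observations
y = 2.0 * x + rng.normal(0.0, 0.1, x.size)   # true process: a simple line

simple = np.polyfit(x, y, 1)     # 2 parameters
flexible = np.polyfit(x, y, 9)   # 10 parameters, one per observation

# In-sample, the flexible model looks flawless...
print(np.max(np.abs(np.polyval(flexible, x) - y)))  # effectively zero
# ...but it extrapolates wildly, unlike the simple model.
print(np.polyval(flexible, 1.2), np.polyval(simple, 1.2))
```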

So I’m very skeptical of this paper’s results. Instead of a simple, clean result like that offered by Goldschlag and Tabarrok, this study relies on an elaborate Rube Goldberg machine of macroeconomic theory. That doesn’t mean it’s wrong, just that we should hesitate to take its numbers too seriously.

This is important, because focusing too much on deregulation might actually hurt our economy. Many government rules, such as prohibitions on pollution, tainted meat, false advertising or abusive labor practices, are things that the public would probably like to keep in place. And reckless deregulation, like the loosening of restrictions on the financial industry in the period before the 2008 credit crisis, can hurt economic growth in ways not captured by most economic models. Although burdensome regulation is certainly a worry, a sensible approach would be to proceed cautiously, focusing on the most obviously useless and harmful regulations first (this is the approach championed by Bloomberg View columnist Cass Sunstein).

We don’t necessarily want to use a flamethrower just to cut a bit of red tape.

Noah Smith is an assistant professor of finance at Stony Brook University and a freelance writer for finance and business publications.

© 2016, Bloomberg View