Thursday, February 28, 2019

Metaeconomics - Wrap Up

I did three posts in early 2015 on what I called "Metaeconomics".  I just felt that Economics, as practiced, lacked something fundamental.  Generally, there was Micro-Economics, the study of the small and specific, and Macro-Economics, the study of the large and more general.  But neither of them seemed to have anything approaching a "big picture" view of the economy as a whole.  So I coined Metaeconomics in an effort to supply a truly big picture.

The effort was a failure.  I am only now, roughly four years later, returning to the subject.  Normally, I recommend going back to my older posts because I think they generally hold up well.  I can't do that in this case.  But, if you want to check them out anyhow, here are links to those earlier posts:
http://sigma5.blogspot.com/2015/01/metaeconomics-introduction.html,
http://sigma5.blogspot.com/2015/02/metaeconomics-panic-of-08.html,
http://sigma5.blogspot.com/2015/03/metaeconomics-markets.html.

The basic problem with Economics is that there has been no major breakthrough in more than 50 years.  For a long time "Keynesian" Economics (named after John Maynard Keynes) dominated the field.  Its major ideas date back to the '30s.  For a few decades "Friedman" Economics (named after Milton Friedman) supplanted it.  It came to the forefront in roughly the '70s.  And in the last few years, roughly since the crash of '08, Keynesian Economics has come back into style, and is again the most popular economic theory.  No post-Keynes/Friedman economic theory has been able to supplant either of these old theories.

These theories contribute modestly to our understanding of the economy and have modest predictive power.  In some periods Keynesian Economics has a better predictive track record.  In other periods Friedman Economics has a better predictive track record.  Both have suffered major misses.  Both are only good for making predictions or providing explanations for the behavior of the economy as a whole.  Their mechanisms for keeping the economy on track are "raise taxes substantially" or "cut taxes substantially".  These remedies are one step removed from operating some kind of simple on/off switch.

To provide a clearer picture of the dismal state of the "Dismal Science", a common nickname for the study of Economics, I am going to compare it with weather prediction.

A couple of hundred years ago weather prediction consisted of folk wisdom like "red sky at night, sailor's delight [good weather]; red sky at morning, sailors take warning [bad weather]".  To this was added the dependence of the general state of the weather on the calendar.  In the northern hemisphere it is generally colder in the winter and warmer in the summer.  Events like the monsoon season tended to start and end at roughly the same time in the calendar year.

Slowly that situation improved.  People studied and categorized clouds.  They noticed that weather patterns tended to move from one area to another following roughly the same path.  As they moved they tended to evolve in predictable ways.  Things eventually evolved to the point where, by the '60s, a one-day forecast was pretty reliable.

A contributing factor was the study of fluid dynamics, the way fluids like air behave, and the study of atmospheric chemistry.  This led to a theoretical ability to predict the weather.  The problem was that using "first principles" (the underlying chemistry and physics of the atmosphere) to predict the weather was impractical.  It might take a hundred years on the fastest computer then available to perform the calculations necessary to accurately predict tomorrow's weather.

There was also a severe shortage of data.  Ground observations were sparse, with readings from perhaps a thousand points in the US.  The situation was even worse at sea.  Many merchant ships collected basic weather data as they went about their business.  But data was only available for the places the ships went, and it was subject to delays sometimes measured in months.

Since then, two things have happened.  First, satellite data collection has resulted in the availability of large amounts of data for all parts of the earth.  And the data is available in near-real-time (a delay of perhaps an hour).  Second, the most powerful computers have become very much more powerful than those available in the '60s.  The super-computers now available can perform the exact same forecast calculation that would have taken a hundred years back then in something like a second.

So weather forecasts are now pretty good stretching out several days.  And forecasts of seasonal (or longer) weather trends are remarkably good.  And one of the things that has happened is that the "surprise factor" has been almost completely eliminated.  In the '60s a major storm swept into my area with zero warning.  More recently "Superstorm Sandy" was not accurately predicted (at least in the US) until it was almost upon us.  But in both cases improvements have been made that make a repeat of either event unlikely.

Back then there was little satellite coverage of the Pacific Ocean off the Washington Coast.  So the storm was invisible until it made landfall.  Now satellite coverage is much better.  But the key thing is that a weather radar has been installed on the Washington Coast that is capable of scanning out a hundred or so miles into the Ocean.

In the case of Superstorm Sandy, the biggest contributing factor was that the US Weather Bureau only had access to wimpy super-computers.  The Europeans had a much better super-computer (and somewhat better software).  The US has since installed bigger super-computers and improved its software.  Superstorm Sandy drove home the message that the US needed to up its game.

The situation in my neck of the woods is actually pretty good now.  Forecasts are still off.  But usually what is going on is that the forecast errs slightly in its prediction of when or where something is going to happen.  Completely missing a big storm, like what happened in the '60s, looks to be a thing of the past.  Still, small errors in location or timing can make a tremendous difference in people's lives.

In the recent snowfall in my area my sister, who lives a few tens of miles from me, got about five times as much snow as I did.  I was able to get around in my four wheel drive Subaru if I really needed to.  My sister, who owns a similar car, couldn't.  The snow in her area was just too much for her car to handle.  So a "small" difference in location translated to a large difference in the impact the storm had on people.

Tornadoes are small events.  The touchdown area is perhaps a few hundred yards wide and a few miles long.  The conditions that cause a tornado to form are little different from the conditions where no tornado forms.  Small errors in distance, time, or conditions can make the difference between life and death, or between being completely wiped out and suffering no damage at all.  The same is true when it comes to predicting the time and place of landfall for a hurricane.  So getting it exactly right can be critically important.  But still, getting it really close to right is a big improvement over "I have no clue".

Now consider the situation when it comes to Economics.  There is no "big data" when it comes to economics.  An economist can run the latest state-of-the-art model on a five-year-old PC and get the result almost immediately.  The models are relatively simple and the quality and quantity of data they have access to is similar to what weather forecasters had access to two hundred years ago.

If you gave the weather forecasting people a supercomputer that was a thousand times as fast as the ones they currently have access to they would have no problem keeping it busy.  More importantly, they would use the massive increase in computing power to turn out noticeably better forecasts.

Weather forecasters have to hobble their software because they only have about eight hours to turn out a forecast for what's going to happen a day out.  This forces them to make simplifications that substantially reduce the calculation's ability to get it right.  They have to do this in order to keep the run time of the forecast acceptable.

With a supercomputer that was a thousand times faster, fewer simplifications would be necessary.  That would make the forecast more realistic.  And that would make it more accurate.  An accurate forecast of what the weather will look like tomorrow that takes a week to run is useless.

Economists do not have this problem.  They don't have to hobble their models to get them to finish in a timely manner.  "Fast", in economic terms, is measured in days.  Anything that can turn out a result in a few hours is fast enough.  And pretty much any PC made in the last five years can do this for pretty much any economic model currently in use.

The problem is that there is little data to put into the models.  So even performing an elaborate and complex analysis of this data doesn't take very long.  This, coupled with economic models that are no more sophisticated than weather models of a couple of hundred years ago, results in economic forecasts that are not very reliable and don't tell you much.

Economies tend to evolve in a modestly predictable manner.  The economy of one country has some influence on other countries.  That's it.  That's all.  The result is that the crash of '08 was every bit as big of a surprise to economists as the storm that hit my part of the country all those decades ago.  There is no Economics equivalent of the satellite data that forecasters use.  There is no Economics equivalent of the "first principles" understanding of the economic equivalent of the chemistry and physics that undergirds weather models.

People who study the weather can calculate the average temperature for the US for a given year.  It is useful in tracking Global Warming and for not much else.  But they can also drill down to days and locations and tell you in great detail what happened.  In fact, they generate the "US average temperature" by, in effect, summing up all those detailed numbers.

The Economists' equivalent of this one number is the GDP (Gross Domestic Product - a commonly used measure of the size of the economy as a whole) for the entire US for a particular year.  The difference is that's pretty much all Economists can tell you about the economic "weather".  They can't drill down and tell you what's happening in this small piece of the economy on this specific day.  Like average temperature for the year, GDP for the year tells you something useful.  But it doesn't tell you anywhere near as much as having the economic equivalent of a daily weather forecast for each small piece of the economy would.

But, whereas pretty reliable daily forecasts of the weather in a relatively small area are available, economists have pretty much no clue as to what is happening on a day by day basis or for small parts of the economy.  They can perhaps provide economic data for a state for a given month or quarter but finer grained data is just not available.

And if you don't have that kind of fine grained data, you can't do fine grained economic forecasts.  And, since you can't do these kinds of forecasts, you can't test various models to see what works best.

Everybody has been "scoring" weather forecasts for accuracy since before any of us were born.  That feedback, this forecast got the right answer, that forecast got the wrong one, is the key to developing and testing competing theories of how weather works and, more importantly, how to forecast what it is going to do next.  Weather theories that work poorly get discarded in favor of theories that work better.  Sometimes "crazy" theories work better than sensible ones.  In the absence of competitive testing, "sensible" economic theories persist even when it is pretty obvious that they have major problems.
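To make the idea of scoring concrete, here is a minimal sketch in Python of how two competing forecasts might be graded against what actually happened.  The numbers are invented, and mean absolute error is just one of many possible scores; real forecast verification uses far more data and more sophisticated measures.

```python
# Minimal sketch of forecast "scoring": compare two competing forecasts
# against what actually happened.  The numbers are invented for
# illustration; real verification uses far more data and fancier scores.

observed   = [61, 63, 58, 55, 59, 62, 64]   # actual daily high temps (F)
forecast_a = [60, 62, 60, 57, 58, 61, 65]   # model A's predictions
forecast_b = [65, 60, 52, 60, 63, 58, 70]   # model B's predictions

def mean_absolute_error(forecast, actual):
    """Average size of the miss, regardless of direction."""
    return sum(abs(f - a) for f, a in zip(forecast, actual)) / len(actual)

score_a = mean_absolute_error(forecast_a, observed)
score_b = mean_absolute_error(forecast_b, observed)

print(f"Model A misses by {score_a:.1f} degrees on average")
print(f"Model B misses by {score_b:.1f} degrees on average")
print("Keep model", "A" if score_a < score_b else "B")
```

It is exactly this kind of routine, unglamorous bookkeeping that lets bad weather models get retired and better ones take their place.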

Keynesians have been battling with the people who follow Friedman for many decades now.  Both theories have major problems.  Why haven't both been discarded in favor of a theory that works better than both?  Because the ability of Economists to carefully test theories is so poor that both factions have insufficient reason to abandon their theory.

All each side knows is that the theory championed by the other side is "fatally flawed".  There may be some looney sounding theory that works better than either.  But it tends to get laughed out of the room without getting a serious test.  And that's because a serious test, one that is truly convincing to all the experts, does not exist.  There is always "sufficient reason" to not discard a theory.  On the other hand, the evidence supporting a new theory is always judged to be insufficient.  So little or no progress is made.

The economic equivalent to satellite weather data actually exists.  Banks and bank-like entities process billions of money transactions each day using computers and databases.  There is no technical reason why all this data can't be swept up and deposited into a central repository.  If such a central repository existed, and if economists had access to it, this would be a game changer.

It would finally be possible to accurately and reliably report what the "economic" weather was on a certain day and in a certain small part of the economy.  That, in and of itself, would be a massive change from the present state of affairs.  Economists would finally have access to big data.
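As a purely hypothetical sketch, suppose such a repository existed and exposed records with a date, a region, and an amount (none of which is real today).  Producing a fine-grained economic "weather report" would then be little more than an aggregation:

```python
from collections import defaultdict

# Toy sketch: IF a central repository of transaction records existed,
# fine-grained economic "weather" reports would be a simple aggregation.
# The records and field names here are entirely hypothetical.

transactions = [
    {"date": "2019-02-25", "region": "Seattle", "amount": 120.00},
    {"date": "2019-02-25", "region": "Seattle", "amount":  45.50},
    {"date": "2019-02-25", "region": "Spokane", "amount":  80.25},
    {"date": "2019-02-26", "region": "Seattle", "amount": 310.00},
    {"date": "2019-02-26", "region": "Spokane", "amount":  19.99},
]

# Total activity per (day, region) -- the economic equivalent of a
# local daily weather observation.
activity = defaultdict(float)
for t in transactions:
    activity[(t["date"], t["region"])] += t["amount"]

for (day, region), total in sorted(activity.items()):
    print(f"{day}  {region:8s}  ${total:,.2f}")

# Summing everything up gives the single GDP-style aggregate that is
# pretty much all economists have to work with today.
print("Aggregate:", f"${sum(activity.values()):,.2f}")
```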

With accurate and reliable data for small parts of the economy and spanning small periods of time, it would be possible to start creating forecasts and testing them against the data.  There would finally be a compelling reason to discard one economic model in favor of another.  The criterion would shift from "is it sensible or not?" to "does it work or not?"

It would also provide the data on which "first principles" could be developed, serving a purpose similar to the fluid dynamics, chemistry, and physics understanding that underpins weather models.  We could develop a theory of "money physics".

Various "money physics" theories have been developed at one time or another.  But it has not been possible to subject them to the kind of credible and rigorous testing that is immediately convincing to a large majority of people in the field.  It would be possible to shift the debate from "which theory do I like?" to "which theory works?"  That's an environment in which real progress becomes possible.

What I am talking about here is a theoretical possibility.  In theory, the data is available, but only in theory.  No one has seriously proposed that somehow all this data be made widely available to scientists in a way analogous to the treatment of weather data.  There are a whole host of reasons why it is presently inconceivable that such a thing would be allowed.  I am not going to bother listing them.

But the economy is like the weather in the sense that a bunch of small scale events combine to create the big picture.  People spend money, or not.  People buy this and not that.  Companies and governments behave similarly.  The aggregate of all of these financial decisions is then combined according to "money physics" rules that we currently don't understand very well.  The result is the amount of aggregate economic activity we see or don't see.

If you just say "tomorrow's weather is going to be exactly the same as today's" you will have about an eighty percent success rate at predicting the weather.  Predicting economic activity is slightly more complicated.  People's pattern of economic activity depends on whether it's a "work" day or a "weekend" day, for instance.  But if you make a couple of small adjustments to account for things like workday/weekend, then you can achieve a very accurate forecast of tomorrow's economic "weather".  Unfortunately, professional economists are capable of little better than that.  They often don't even do that well.
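To show how crude such a baseline is, here is a toy "persistence" forecast in Python; the activity number and the weekend adjustment factor are made up for illustration.

```python
from datetime import date, timedelta

# Toy "persistence" baseline for economic activity: tomorrow looks like
# today, adjusted for whether tomorrow is a workday or a weekend day.
# The activity figure and the weekend factor are invented for illustration.

def predict_tomorrow(today: date, todays_activity: float,
                     weekend_factor: float = 0.6) -> float:
    """Persistence forecast with a crude workday/weekend adjustment."""
    tomorrow = today + timedelta(days=1)
    today_is_weekend = today.weekday() >= 5      # Saturday=5, Sunday=6
    tomorrow_is_weekend = tomorrow.weekday() >= 5

    if today_is_weekend == tomorrow_is_weekend:
        return todays_activity                    # plain persistence
    if tomorrow_is_weekend:
        return todays_activity * weekend_factor   # stepping down into a weekend
    return todays_activity / weekend_factor       # stepping back up to a workday

# Example: Friday's activity used to predict Saturday's.
print(predict_tomorrow(date(2019, 2, 22), 100.0))   # Friday -> Saturday: 60.0
```

A baseline this simple sets a surprisingly high bar, which is exactly the complaint: the profession's forecasts often fail to beat it.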

This ignorance costs us all.  We all know that it should be possible to do better.  Economics that works would allow us to discard economic policies that are harmful and replace them with policies that are helpful.  The problem is that there is no way to convincingly rate economic policies on a harmful/helpful scale.  So people get invested in one economic theory or another without having any real idea if the theory they favor is better or worse than the alternatives.

To the extent that we can tell, there are a lot of harmful policies being pursued.  But the supporters of those policies can persuasively argue for their retention because the argument that the policy is harmful is on shaky ground.  That's because the arguments for or against any particular economic policy are on shaky ground.  So which policies are implemented and which are discarded depends more on the political power of supporters and detractors than anything else.

I'm out of ideas.  More importantly, the profession seems to be out of ideas too.  And this is in spite of the fact that the current "state of the art" in Economics is pretty bad.  As a result, progress is unlikely.  So, there isn't a good reason to continue the discussion of this subject.  So, for the moment I am wrapping it up.  If the situation changes, I'll reopen the subject.

If you want a ray of hope, geology was in bad shape in the '60s.  Then Plate Tectonics came along "out of left field" and revolutionized things from top to bottom almost overnight.  Geology has been an active field that has seen tremendous forward progress since.  The thing about a "Plate Tectonics" type idea is that no one sees it coming.  Economics could be revolutionized by a similar "no one saw that coming" idea at any time.  I don't know what such an idea would look like but that's exactly the point.
