Monday, January 19, 2015

Metaeconomics - An Introduction

This is a subject I intend to return to repeatedly.  This specific post is intended to create and promote a particular definition for a term.  "Metaeconomics" is not a new term.  But a Google search only yields 8,520 hits.  That is a pathetically small number.  Try to enter any search term into Google and get fewer than 10,000 hits; it is hard to do.  And, as commonly happens, most of those "hits" are off point.  Only the first nine hits are on point.  So what do I have in mind?

Back while I was in college I took the usual "Introduction to Economics" class.  In that class I was introduced to the terms "microeconomics" and "macroeconomics".  Little has changed since the late '60s when I took the class.  Those terms are still in common use.  For the purposes of this piece I define microeconomics as the economics of the individual and macroeconomics as the economics of groups.  They are presumably connected by the fact that groups of individuals, each acting as microeconomics predicts, will, when aggregated into a group, cause the group as a whole to behave in the manner macroeconomics predicts.

Now many would disagree with these definitions and their complaints would be legitimate.  But I am going to ignore them anyhow.  Instead I am going to make an observation, actually a chain of observations.  We had in 2007-2008 an economic event of epic proportions.  Yet it was not predicted by anyone.  Specifically, it was not predicted by any economist.  That's bad.  I have now read many books on the subject.  Some were written by economists and some by others.  Several of them just attempted to provide a narrative of what happened.  But a number of them, after doing a nice job on the "what happened" part of the story, tried to then move on to the "why" part of the story.  And a few moved finally to "what should be done".

I want to point out one specific book in particular.  "The Shifts and the Shocks" is by Martin Wolf.  Wolf is a longtime and well respected writer for the Financial Times.  Roughly speaking, the FT is the British equivalent of The Wall Street Journal.  Wolf is not a professional economist but he is thoroughly familiar with the work of professional economists.  And he is a longtime observer of the financial scene.  And he does the "what happened" part followed by the "why" part followed by the "what should we do" part as well as anyone.  Like others I would score him as doing an excellent job on "what happened", a pretty good job on "why", and a terrible job on "what we should do".  Now back to economics.

Attempts to explain and understand the economic actions of groups fall into the "macroeconomics" bucket.  But macroeconomics does a poor job of it.  It does a pretty good job analyzing a single market, say Oil.  But there you have a homogeneous target.  We are just talking about the Oil market.  And macroeconomics does a pretty good job not just with Oil but with many other markets too.  Where it does a poor job is when you take things to the next level and try to understand what goes on when many markets interact.  But that "meta" question is more important than understanding markets where each is considered as a stand-alone proposition.  It is when you combine all the markets together that you get the world economy as a whole, and that's what affects everyone the most.

So one idea behind Metaeconomics is to move to the next level and study large aggregations of markets.  But I want to narrow the focus from the large vague mission of understanding the economy as a whole.  Economists have been trying to do that for something like 200 years.  But the recent experience with the 2007-2008 crash shows just how poor a job they are doing.  This very problem is one of the reasons I recommend Wolf's book so highly.

Wolf looks at all the main strains of economic theory.  I mention Keynesianism to give you an idea of what I am talking about.  I mention it because it is the one people are most likely to have heard of.  But he works his way through all the major strains, including one that was popular about 150 years ago and went out of favor about a hundred years ago.  All the theories share one common attribute.  Somewhere along the line they have gotten some major event wrong.  Paul Krugman has been championing a Keynesian approach to our current economic malaise for several years now.  But Krugman would be the first to admit that Keynesianism has made major blunders if applied uniformly over the last 60 years or so.  The problem is that it's not just Keynesianism.  Pick your poison.  Every one of them results in a major blunder when applied to the economic history of the last 100 years or so.  In fact, what typically happens is that a particular strain becomes the favored one.  Then it results in a massive blunder and it is replaced either by a new strain or a rehabilitated version of an older one.  But none of them gets it right all the time.

Now this is not a unique problem.  It happens all the time and all over the place.  To pick a decidedly non-economic example:  there is the argument among scientists about the nature of light.  There were two strains, the "it's a particle" strain and the "it's a wave" strain.  The argument went back and forth for hundreds of years.  One strain would gain favored position.  Then some new experiment would come along and embarrass it, resulting in the other strain gaining favored position.  The impasse was finally broken by a "third way" strain.  Light behaves like a particle in some situations and like a wave in others.  But in some situations it behaves like neither.  It is a "photon", neither completely particle nor completely wave.

Physicists arguing about the nature of light accepted the results of well conducted experiments.  They just argued about the way those experimental results should be interpreted.  And that's the way economists should behave.  But a lot of the time they don't.  There are adherents to each of the major strains of thought in economics.  Mostly they just stick with their pet theory and argue that it is more right than the other guy's.  There needs to be a scorekeeper.  And that's part of my proposal.  But before I get to that, I want to make another digression.  Why?  Because I can.

I was never much interested in Geology.  This was in spite of the fact that I had an aunt whom I adored who had a degree in and a love for Geology.  I like areas where there are a few key organizational principles.  Geology at the time was the opposite.  Geologists knew a lot about a lot of things.  They had characterized tens of thousands of minerals, for instance.  And they knew a lot about some geological phenomena.  They knew, for instance, how to turn a large mountain into a series of small hills by adding large quantities of weathering and time.  What they had no clue about was how you built up a mountain or why the pattern of prevalence of minerals was what it was.  So, to oversimplify, Geology was a bunch of bags of unrelated facts.

Then Plate Tectonics came along.  Plate Tectonics provided the organizational principle under which you could arrange your previously unrelated bags of facts.  Plates collided to push up mountains.  These and other processes explained why minerals appeared in certain patterns.  And so on, and so on.  I became familiar with Plate Tectonics in the late '70s and early '80s.  Somewhere along the way I came across an "Introduction to Geology" textbook that had been published in 1968.  This was about the time when the early formulations of Plate Tectonics were being created.  But it was far too soon for a new theory to appear in a textbook aimed at undergraduates.

So there it was.  One chapter would discuss one bag of facts.  Another chapter would discuss another bag of facts.  If you weren't looking for it you wouldn't notice it.  There was no organizational principle to tie the chapters together.  They were each required to stand on their own two feet with no buttressing ideas to show how they related to the material in other chapters.  A lot of people took a Geology class and were perfectly happy.  For them the "bags of facts" approach was good enough.  But "bags of facts organized and presented so that they all reinforce and support each other using the scaffolding of some organizing principles" works much better.  I contend that Economics is now in the same position as Geology was before the advent of Plate Tectonics.

Now, for another digression.  But this digression leads us pretty directly to where I want to go.  There is in Atmospheric Sciences (the academic version of "weather man") circles something called a GCM.  GCM is often referred to as a Global Circulation Model but the preferred styling is as a General Circulation Model (at least, according to Google).  These are mathematical constructs that model the weather.  At their foundation the models are simple.  There is a branch of mathematics called Fluid Dynamics.  This is the study of the flow of fluids around shapes.  Air is a fluid and it flows around shapes like mountains.  So the equations of Fluid Dynamics, originally developed to understand how to make wings and propellers on airplanes work better, apply to how the weather works.  A subspecies of Fluid Dynamics is the modeling of problems on computers.  This leads to the now well developed field of CFD, Computational Fluid Dynamics.  CFD can be used in any situation involving fluids and shapes.  Change a few parameters and you can use the same CFD program to study the hulls and propellers of ships or how stars go supernova.

At some point the Atmospheric Sciences crowd said "Hey!  There are these cool CFD programs out there.  I bet we could use them to model the weather."  So they did and got interesting results.  One of the early efforts was the Carl Sagan "Nuclear Winter" initiative.  Very simple CFD models were used to model what would happen if a large number of nuclear explosions took place over a short period of time.  The short answer turned out to be "bad things".  There was a lot of argument about whether Sagan and his people had gotten it right.  And there was general agreement that the Sagan model was too simplistic.  Still the result was interesting and the model was interesting.  So people picked and poked both with respect to the model and with respect to the data.  To this day you can still get a spirited argument about whether the Nuclear Winter scenario is valid or not.  The one thing people agree on is that the model was an interesting and useful tool.

Then along came some other interesting and apparently unrelated data.  A fellow named Charles David Keeling decided it would be a fun idea to start continuously measuring the amount of Carbon Dioxide in the air.  He only had enough money to set up one measuring station.  He figured that the Mauna Loa Observatory in Hawaii was a good spot.  It was high on a mountain in a place that was away from large populations and industries.  It was effectively out in the middle of nowhere, being pretty much smack dab in the middle of the Pacific Ocean.  When he started he had no real idea of what he would find.  Generally, he figured it would go up and down but he had no idea what the long term trend would be.  And it does go up and down.  But the up and down follows the calendar, religiously.  It falls during the Northern Hemisphere's growing season, when plants are drawing Carbon Dioxide down, and rises again through the fall and winter.  This was interesting but not that interesting.  What was "that interesting" was to compare the year over year figures.  Pick a month or average a year.  It doesn't matter which month or which pair of years you pick.  Every year the new numbers are higher than the old ones.  Every time.  That was completely surprising.

After he had been at it a while Keeling published his data.  The first thing that happened was people went at him for not doing the measurement right.  But everything checked out.  He was doing the measurement right.  Then people stood around for a while waiting for the trend to break and the year over year numbers to go down.  After they had waited a few years they gave up on that.  And in the meantime others started setting up similar equipment and making similar measurements.  And everyone got pretty much the same result.  The amount of Carbon Dioxide in the atmosphere was going up, once you subtracted out annual variation (the seasonal change).  And by this time people knew that Carbon Dioxide was a greenhouse gas.  It contributes to the "greenhouse effect".  It's warmer in a greenhouse than it is outside.  So people started asking "is the earth suffering a greenhouse effect such that it is getting warmer?"  This was the start of the whole "Global Warming" issue.

I don't want to get into the "argument".  Scientists say "yes, the earth is warming".  There is a well organized "denialist" community.  But except for a few pet scientists it is entirely composed of people who for political or economic reasons find Global Warming to be an "Inconvenient Truth".  One thing the denialist crowd did was to loudly shout "show me the proof".  And they had enough political pull to make sure they were and are heard.  One of the most important responses of the scientific community was to pull out and dust off GCM models.  Then the denialist community loudly shouted "there is this and that and the other thing wrong with your GCM".  Scientists responded by enhancing and improving their GCMs.  The modern ones are very sophisticated and do a very good job.

One unexpected side effect of this whole Global Warming GCM effort has been to settle a lot of debates within the science community about how weather works.  In case after case someone says "it works this way".  People build "this way" into one or more GCM models and the model either works (does a good job at predicting the weather) or it doesn't.  If the GCM using a particular theory of how something works does badly people take a hard look at the theory.  If after a certain amount of tinkering and tweaking the theory still doesn't work, as measured by the accuracy of GCMs, the theory gets discarded in favor of the ones that do work.  The GCMs have come to represent a neutral scorekeeper in determining which theories are right and which are wrong.  This has resulted in a considerable amount of advance in our understanding of the weather.  Weather theories do not go on forever.  Some live and some die.

And this is a big problem with economic theories.  They are all "zombie" theories.  They can't be killed.  As noted above, Wolf spends a considerable amount of time on a theory from the 1800s.  It was pronounced dead a hundred years ago, and yet it came back.  It can't seem to die even though it has major flaws that no one has figured out how to fix.  And it's not just this one theory.  It's all of them.  There is no neutral scorekeeper in the Economics business.  So:

I propose that the first order of business for Metaeconomics be to undertake to build GCMs that model not the weather but the economy.

Weather GCMs model the flow of a fluid, air.  Economic GCMs would model the flow of money.  Money is not a fluid so Fluid Dynamics is not an appropriate foundation.  But money does have properties that follow mathematical rules.  At the simplest level, there are processes that create money and processes that destroy money.  Other than that, money flows from one place (typically an account) to another.  This flow is lossless.  The total amount of money doesn't change, just its location.  Economists think they understand all three behaviors.  You can find many learned papers on where and under what circumstances money is created.  You can find many learned papers on where and under what circumstances money is destroyed.  The rules of accounting are Johnny on the spot to demand that money always be exactly accounted for.  So, after you have accounted for the creation processes and the destruction processes, you have a "conservation" law.  Other processes (i.e. all those processes that neither create nor destroy money) conserve the amount of money.  This, like Fluid Dynamics in the "weather" case, provides enough of a foundation to permit the flow of money to be modeled mathematically.
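To make the "conservation law" idea concrete, here is a minimal sketch in Python.  The account names and amounts are purely illustrative, my own invention and not part of any real economic GCM; the point is just that transfers move money around without changing the total, while only the explicit creation and destruction processes change it.

# A toy bookkeeping model of the flow of money.  Transfers between accounts
# are lossless; only creation and destruction change the total.
balances = {"households": 1000.0, "firms": 500.0, "banks": 2000.0}

def transfer(src, dst, amount):
    """Move money from one account to another; the total is unchanged."""
    balances[src] -= amount
    balances[dst] += amount

def create(dst, amount):
    """A creation process, e.g. a bank extending a new loan."""
    balances[dst] += amount

def destroy(src, amount):
    """A destruction process, e.g. a loan being repaid and retired."""
    balances[src] -= amount

total_before = sum(balances.values())
transfer("households", "firms", 250.0)   # spending: a pure flow, conserved
create("banks", 100.0)                   # new money enters the system
destroy("firms", 40.0)                   # some money is extinguished
total_after = sum(balances.values())

# The change in the total is exactly creation minus destruction.
assert abs((total_after - total_before) - (100.0 - 40.0)) < 1e-9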

There are other issues.  Many billions of dollars (or the equivalent in Pounds Sterling, or Yen, or Euros, and so on) are expended measuring with extreme accuracy exactly how much money is at a specific place at a specific time.  The process is called accounting, and the expectation is that the location and amount of money in very complex financial entities can be measured so accurately that the books balance "to the penny".  Atmospheric Scientists would be ecstatic if they could measure the basic parameters of weather to anything like a similar accuracy, but they can't.

This is a roundabout way of saying that a GCM for economics will look a lot different than a GCM for the weather.  In theory the weather is simple.  But the amount of data that must be processed through these simple processes stresses the capabilities of even our largest supercomputers.  The economic GCM has the opposite problem.  The amount of data is modest but the complexity of the system is immense.  And the "hard thing" for each of the two systems is different too.  In weather GCMs there is the "butterfly effect".  The idea is that if a butterfly flaps its wings at the wrong time in India it may eventually affect the path of a Hurricane in the Caribbean.  This is a cute way of getting at the fact that small changes in the initial data can snowball into large differences in a forecast.  Weather GCM modelers are forever trying to come up with new ways to deal with this well known problem.
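For readers who want to see the butterfly effect in miniature, here is a small Python sketch.  It uses the logistic map, which is not a weather model at all, but it shows the same basic phenomenon: two runs whose starting points differ by one part in a million end up bearing no resemblance to each other after a few dozen steps.

# Sensitivity to initial conditions, illustrated with the logistic map.
def step(x, r=3.9):
    return r * x * (1.0 - x)

x_a, x_b = 0.500000, 0.500001   # starting states differ by one part in a million
for n in range(1, 41):
    x_a, x_b = step(x_a), step(x_b)
    if n % 10 == 0:
        print(f"step {n:2d}: run A = {x_a:.6f}, run B = {x_b:.6f}, gap = {abs(x_a - x_b):.6f}")
# By step 40 the gap between the two runs is as large as the values themselves.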

In the case of economic GCMs the problem is "animal spirits".  This cute phrase has to do with the fact that "human beings making decisions" are an integral part of how the economy behaves.  Given absolutely identical situations humans may react quite differently.  Theoretically, this should wash out when we replace one individual with a large number of individuals.  Instead of dealing with the vagaries of a specific individual we are dealing with the statistical average of a number of individuals.  This is supposed to "wash out" the variation and make crowds more predictable than individuals.  But it doesn't.  Crowds engage in "herd behavior" and the herd behavior is no more predictable than that of individuals.  Again, the herd may behave quite differently in one situation than it does in another essentially identical situation.  This "different decision in the same circumstances" problem is what economists mean when they talk about the "animal spirits" problem.

As a specific example, suppose you give a bunch of people a raise.  Simplistically, they have two choices:  they can save the additional money or they can spend it.  In the actual case we are dealing with the extent to which, on average, people save versus the extent to which, on average, they spend.  If the answer is "mostly save" the economic result is one thing.  If the answer is "mostly spend" the economic result is quite different.  The "animal spirits" problem drives economists nuts pretty much the same way the "butterfly effect" problem drives Atmospheric Science people nuts.

I propose that in the initial round of economic GCMs we input the animal spirits parameters (e.g. the save/spend ratio is 36% save at a particular place at a particular time) as data.  But once we have some monetary GCM models that work pretty well we can experiment with different animal spirits models.  If a particular model does a good job of reproducing the actual behavior of people in a variety of situations then it is a pretty good model.  If it doesn't then either the model needs more work or it needs to be discarded.  This is exactly what the Atmospheric Sciences people do with their weather GCMs.  They will plug in various approaches to dealing with a specific situation and see which one seems to work the best in the model.  Many approaches can be tried on many situations because there are a number of GCM models and even more modelers.
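Here is a rough Python sketch of what that two phase approach might look like.  Everything in it is hypothetical, from the 36% save rate to the candidate "animal spirits" models and the little history they are scored against; the point is only to show the mechanics of plugging behavior in as data first and as a swappable model later.

# Phase 1: the save/spend split is supplied as measured data.
def extra_spending(raise_amount, save_rate):
    """Extra spending implied by a given save rate (0.36 means 36% saved)."""
    return raise_amount * (1.0 - save_rate)

observed_save_rate = 0.36                            # hypothetical measurement
print(extra_spending(1000.0, observed_save_rate))    # 640.0 gets spent

# Phase 2: candidate behavioral models predict the save rate from conditions,
# and each is scored against what households actually did.
def model_constant(conditions):
    return 0.30

def model_fearful(conditions):
    # Hypothetical rule: people save more when unemployment is high.
    return 0.20 + 0.5 * conditions["unemployment"]

def score(model, history):
    """Mean absolute error of a model's predictions versus observations."""
    errors = [abs(model(c) - observed) for c, observed in history]
    return sum(errors) / len(errors)

history = [({"unemployment": 0.05}, 0.24), ({"unemployment": 0.10}, 0.31)]
for m in (model_constant, model_fearful):
    print(m.__name__, score(m, history))   # keep the model with the lower error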

Even the objectives of various weather GCMs are different.  Some are set up to predict the weather for the next few days.  Others are set up to predict trends lasting months or years.  Still others are set up to model much longer periods of time.  How are they tested?  "Initial conditions" data is input.  In the case of a "next few days" GCM it might be yesterday's actual weather.  The model is then run and the output sampled to see what the high temperature (or rain, or wind, or any number of other things) gets forecast to be for each of the next several days.

Economic GCMs can be built and used in a similar manner.  One might be used to forecast next quarter's GDP.  Another might look out a year or more.  The formal objective of a Metaeconomic GCM would be to model the economy of the entire world.  But "micro" forecasts could be prepared that covered only a single country or region.  Then there's the problem of "the weather only did one thing yesterday".  The Atmospheric Science people deal with this problem by modeling various historical periods.  If we start with the weather exactly thirty years ago does the GCM accurately predict what we know happened over the next few days thirty years ago?  By picking a variety of situations from history Atmospheric Scientists are able to test drive their GCMs in a wide variety of situations.  This eliminates GCMs that are hard wired to get a few cases right or those that were just lucky.

Monetary GCMs can be tested in a similar manner.  Modelers can test drive them over a number of points in history, or over a number of countries or regions.  Since modelers would be starting from scratch, the best place to start is with scenarios that are generally believed to be easy.  Then as the models improve you keep trying them on harder and harder scenarios.  Given that everyone missed it, the 2007-2008 scenario deserves an "extremely hard" rating.  I expect GCMs to initially fare very poorly, even on the "very easy" scenarios.  But the failures should teach important lessons.  And the fundamental lesson of Science is that you learn more from failure than you do from success.
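A sketch of what "test driving" might look like in Python, with everything made up: a placeholder model that assumes steady growth, and a handful of scenarios graded from easy to extremely hard.  The numbers are stand-ins, not real GDP figures; the point is the harness, which initializes the model with a past state, runs it forward, and compares the forecast to what actually happened.

# A toy backtesting harness for an economic model.
def toy_model(initial_gdp, quarters):
    """Placeholder model: assumes steady 0.5% growth per quarter."""
    return initial_gdp * (1.005 ** quarters)

scenarios = [
    # (name, difficulty, starting GDP index, quarters ahead, what "actually" happened)
    ("calm mid-1990s quarter", "easy",           100.0, 4, 102.4),
    ("dot-com bust",           "hard",           100.0, 4,  99.0),
    ("2007-2008 crash",        "extremely hard", 100.0, 4,  94.0),
]

for name, difficulty, gdp0, quarters, actual in scenarios:
    forecast = toy_model(gdp0, quarters)
    error = abs(forecast - actual) / actual * 100.0
    print(f"{name} ({difficulty}): forecast {forecast:.1f}, actual {actual:.1f}, error {error:.1f}%")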

I expect all the current strains of economic thinking in their pure forms to fail.  But consider F=MA, the famous equation from Newton.  It is known to be wrong.  So why do we keep it around?  Because it is much simpler and easier to deal with than the correct Special Relativity formulation and it works well enough in a lot of common situations.  So engineers and scientists have developed a handy rule of thumb.  If the numbers fall into a certain range then use "F=MA" with confidence.  Otherwise, they use the more complex Special Relativity version.  Perhaps a similar thing can be done with the various economic strains.
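The physics version of that rule of thumb is easy to write down.  Here is a small Python sketch using momentum rather than force (the bookkeeping is simpler), with an illustrative 1% of the speed of light as the cutoff; the threshold is my choice for the example, not a standard.

import math

C = 299_792_458.0   # speed of light, in meters per second

def momentum(mass, velocity, threshold=0.01):
    """Use the simple Newtonian formula at everyday speeds, the relativistic one otherwise."""
    if abs(velocity) / C < threshold:
        return mass * velocity                            # Newtonian: p = m*v
    gamma = 1.0 / math.sqrt(1.0 - (velocity / C) ** 2)
    return gamma * mass * velocity                        # relativistic: p = gamma*m*v

print(momentum(1.0, 300.0))      # a thrown ball: the Newtonian branch is plenty
print(momentum(1.0, 0.5 * C))    # half light speed: the relativistic branch is needed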

If the numbers fall into this range then use Keynesian methods.  For this other range use this other method.  And so on.  Perhaps there is a "meta" rule that would allow a combination of strains to work in a far broader range of situations than any one pure strain now does.  That would definitely put us ahead of where we are now.  And it might turn out to be the case that there are ranges of numbers that we actually encounter in the real world where none of the strains work.  That points us to a place where a new strain needs to be developed.  The details of how the field progresses don't matter.  The point is to make the field progress, hopefully much faster than it now does.  The current "zombie" situation, where no strain ever completely dies (or even gets completely replaced with a "new and improved" version of itself), is impeding progress.

And, in the same way that the weather GCMs are helpful beyond their ability to advance our knowledge of atmospheric sciences, the monetary GCMs would be helpful in improving the economy.  What's not to like about that?
