It's a mystery why economists persist with imperfect methods

The last week of December is always a time for reflection and introspection. This year it is particularly so. Economics, and where it is today, is a natural candidate. After all, 20 per cent of this century has already passed, and we laymen must ask what the discipline has to say for itself.

So here goes. The most important part of economics is its method. How is it going about its business now, as opposed to the past?

Until the beginning of the 1980s or so, someone would come up with a theory and then prove it with the help of mathematics and logic. Everyone, including the Nobel committee, would clap. But these proofs were in the abstract. They were also highly conditional: the “if we assume this, then that follows” method.

A new era dawned after governments started collecting more and more data. As data availability grew, proving a theory came to depend on ever more sophisticated econometric techniques. These techniques essentially sought to solve the biggest problem of economics: establishing causality, or escaping the ‘post hoc ergo propter hoc’ fallacy. That’s Latin for ‘after this, therefore because of this’.

This led to the new regression-dependent economist. ‘Have data, will publish’ became the credo.
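To see why this is treacherous, consider a minimal sketch (my own illustration, with invented variable names and numbers, not anything from the column): a hidden confounder drives two series, and an ordinary least-squares regression cheerfully reports a strong ‘effect’ of one on the other even though the true causal effect is zero.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# A hidden confounder, say overall economic activity.
activity = rng.normal(size=n)

# Both series are driven by the confounder; x has NO causal effect on y.
x = 2.0 * activity + rng.normal(size=n)  # e.g., advertising spend
y = 3.0 * activity + rng.normal(size=n)  # e.g., sales

# A naive OLS regression of y on x (with intercept) still finds a large coefficient.
X = np.column_stack([np.ones(n), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
print(f"estimated 'effect' of x on y: {beta[1]:.2f}")  # about 1.2; the truth is 0
```

Here x reliably precedes and predicts y, yet causes none of it: ‘post hoc ergo propter hoc’ in a single regression.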

No one, however, paid much attention to the fact that human data is context-specific, and therefore these results are not fully replicable. Abstractions from reality gave way to approximations of it. So huge mistakes were made and, as a result, in the mid-1990s the method of economics turned to experimental techniques. Economists tried to prove a theory by designing experiments. It kind of worked, in a very limited sense. But it didn’t solve the ‘post hoc ergo propter hoc’ problem.

Then, more recently, another technique was born: randomised controlled trials (RCTs). This basically reversed the sequence followed till then. The experiment comes before the theory, because you don’t know exactly what you are looking for. For example, in this method you randomly split people into two groups and see how they behave when one group is placed under slightly different conditions. Something like that, anyway.
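The mechanics are easy to sketch. Below is a toy simulation, entirely my own construction with invented numbers, of the two-group idea described above: assign a ‘treatment’ at random, compare average outcomes, and the difference in means estimates the effect. Random assignment is what blunts the post-hoc problem, because nothing else systematically differs between the groups.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# Each person has an unobserved baseline outcome of their own.
baseline = rng.normal(loc=50.0, scale=10.0, size=n)

# Random assignment: a coin flip sends each person to treatment or control.
treated = rng.integers(0, 2, size=n).astype(bool)

# Suppose the intervention truly raises the outcome by 2.0 units.
TRUE_EFFECT = 2.0
outcome = baseline + TRUE_EFFECT * treated + rng.normal(scale=5.0, size=n)

# Because assignment was random, a simple difference in means is unbiased.
estimate = outcome[treated].mean() - outcome[~treated].mean()
print(f"estimated effect: {estimate:.2f} (true effect: {TRUE_EFFECT})")
```

Note that the estimate arrives before any theory of why the effect exists; the theorising comes afterwards, which is exactly the reversal of sequence described above.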

Last year, three practitioners of this method, among them a husband-and-wife team, were even awarded the economics ‘Nobel’. However, in my view at least, this validation is only as valuable as the Nobels of the abstract theorists.

With this in mind, I thought it worth asking which of these four methods has made economics more useful. And the answer, as always in economics, is inconclusive.


Possibly the most important reason for this is the discipline’s belief, despite all the evidence to the contrary, that it’s perfectly all right to go from the particular to the general. In logic this is called induction, a method thoroughly discredited after Karl Popper demolished it.

The most popular illustration of this is the swan example: “because all the swans I have seen are white, all swans must be white”. That is not the case, of course. Yet economics persists with it. It has been a fatal flaw.

The opposite method is deduction, which goes from the general to the particular. Thus, all inflations are caused by an excess of money supply, but each also has an added, immediate trigger: a rise in oil prices if you import a lot of oil, or a drought if you don’t have enough irrigation.

Economists are generally quite intelligent. They know all this. So it’s a mystery why they persist with these approaches. That leads to the question: is it that they are unable to incorporate three new realities into their formal analytical structure?

I discuss this in the next two parts. One is on microeconomics. The other is on macroeconomics.

 

This is the first part of a three-part series.


