In 2015, Angus Deaton received the prize for work on poverty, welfare and consumption that made extensive use of large-scale household surveys. That’s the kind of development economics that students my age cut their teeth on, and it’s still the mainstay of many government policy economists and think tanks in India and elsewhere. It seeks to answer big questions through big data, while never losing sight of the theoretical assumptions underlying its (tentative) inferences. The answers it provides are often very useful, but aren’t what anyone would call conclusive.
Some would say -- certainly, Deaton does, and quite often -- that this is the very opposite of what Banerjee, Duflo and Kremer do. (I was taught as an unenthusiastic graduate student by all three laureates and co-edited, with Banerjee and others, a new book, “What the Economy Needs Now.”)
Professors at Harvard and MIT, all associated with the Abdul Latif Jameel Poverty Action Lab, the three economists transformed their field by providing incontrovertible answers to clearly defined questions. If hardcore microeconomic theory is often accused of “physics envy” because of how it introduces complex mathematics into everyday questions, the economics these three practice is a different kind of physics altogether: the experimental kind. You set up a literal experiment, a “randomized controlled trial,” to determine whether a particular policy intervention works, and if so how well and at what cost. It’s this “experimental approach to alleviating global poverty” that won them the Nobel.
The catch? Usually, the clearer the answer, the more carefully controlled the question. If there’s one phrase that those of us who studied development economics in the 1990s could repeat in our sleep, it’s “correlation is not causation.” I saw all too many great minds tortured by a hapless search for instrumental variables that could turn a mess of survey data into a clean -- and publishable -- inference.
Things get much easier when, instead, you can supervise the creation of that data under controlled circumstances, as in a laboratory. Give half of a randomly selected set of schools cameras to photograph students and teachers every day; if the schools with cameras see teacher absenteeism fall by 20%, then you can be fairly certain the cameras had something to do with it.
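The logic of that camera experiment can be sketched in a few lines of code. Everything below is a hypothetical illustration: the number of schools, the baseline absenteeism rate and the size of the treatment effect are assumptions made up for the sketch, not figures from the actual study.

```python
# A minimal sketch of a randomized trial, with made-up numbers.
import random

random.seed(0)

n_schools = 100
schools = list(range(n_schools))
random.shuffle(schools)
treated = set(schools[: n_schools // 2])  # half the schools, chosen at random, get cameras

def absenteeism(school: int) -> float:
    """Simulated teacher-absenteeism rate for one school."""
    base = random.gauss(0.40, 0.05)  # assume a ~40% baseline rate
    # Assume cameras cut absenteeism by roughly 20% of the baseline.
    effect = -0.20 * 0.40 if school in treated else 0.0
    return max(0.0, base + effect)

rates = {s: absenteeism(s) for s in schools}
treat_mean = sum(rates[s] for s in treated) / len(treated)
ctrl_mean = sum(rates[s] for s in schools if s not in treated) / (n_schools - len(treated))

# Because assignment was random, the gap between the two group means
# can be read as an estimate of the cameras' causal effect.
print(f"control {ctrl_mean:.2f}, treated {treat_mean:.2f}, "
      f"estimated effect {treat_mean - ctrl_mean:+.2f}")
```

The point of the sketch is the comparison at the end: randomization is what licenses reading a simple difference in group means as a causal effect, rather than hunting for instrumental variables in messy survey data.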
The question, then, is whether that is a policy that can work at scale. And here we come to a crucial bit of hand-waving that is a red rag to people like Deaton. On the one hand, you insist that all you are doing is conducting an experiment under controlled circumstances and reporting the results. You’re not saying that this would work for all schools under all circumstances.
On the other hand, you are clearly implying that this is a policy worth scaling up to more schools: After all, it’s shown dramatic results in this trial. Deaton is perhaps correct that this sort of economics is brilliant at persuasion, but less effective at broadly advancing the sum of our knowledge.
While developing economies aren’t the only places where randomized trials are used to evaluate policy, it’s easy to see why they’ve seized on the idea. Countries such as India are always short of capacity. The average bureaucrat has far more policy proposals on hand than she has resources. You need to know in advance which ones work; policy experimentation at scale can be extremely dangerous.
Certainly, randomized controlled trials have helped make development economists far more influential in actual developing countries than I ever thought they could be back in the 1990s. Such trials are also partly responsible for the reinvention of philanthropy: The post-Gates Foundation donor world focuses on ensuring that every dollar spent achieves the best possible outcome.
It’s easy to see why this infuriates many, and not just Deaton. For one thing, it’s deliberately depoliticized: Who can argue with an experimental result? This is incredibly useful in politically divided developing countries, since it means that pragmatic politicians of any stripe can implement the policies that development economists suggest. But, for critics on both left and right, this is bloodless technocracy that ignores the real problems caused by the System (left) or Culture (right).
I’m sure there’s some truth to that. Still, I am tired of people burdening economics with questions that it simply is not yet competent to answer. In a world where policy is too often made on a whim, amid lies, or to serve ideology, a prize for people who ask for a little bit of evidence first is worth celebrating.