
The Challenge of Public Policy Research

Esther Duflo with Rwandan Coffee Farmers

A recent issue of The New Yorker (May 17, 2010) includes an interesting article called “The Poverty Lab.”  It features Esther Duflo, a French economist on the MIT faculty who co-directs the Abdul Latif Jameel Poverty Action Lab and whose research focuses on helping people in poverty.

There are essentially two competing views on how to help poor countries.  One holds that donors should concentrate large amounts of money in those countries.  The other holds that large amounts of money have yielded little in terms of long-term economic improvement, and that poverty is more likely to be “eradicated by the local action of democracy and markets.”

Esther Duflo believes that there is no data to support either argument, and her approach is to test these and other social policies through randomized controlled trials.  Here is a link to a talk she gave on this point at the TED conference.  According to Duflo, randomization “takes the guesswork, the wizardry, the technical prowess, the intuition, out of finding out whether something makes a difference.”  There are practical and ethical limits on when social policy questions can be answered by randomized trials: they require large treatment and control groups that may be hard to create, and “[r]andomization causes some part of the population to miss out on the new thing.”
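
To make the mechanics concrete, here is a minimal sketch of that logic in Python.  Everything in it is invented for illustration (the sample size, incomes, and program effect are simulated numbers, not data from any study in the article).  The point it shows is the one Duflo makes: because assignment to the program is random, the treatment and control groups are alike on average, so a simple difference in group means estimates the program’s effect without guesswork about who chose to participate.

    import random
    import statistics

    # Hypothetical randomized evaluation of an anti-poverty program.
    # All numbers below are simulated, not drawn from any real study.
    random.seed(42)

    n = 1000  # households enrolled in the evaluation

    # Random assignment: on average, the two groups now differ only in
    # whether they received the program.
    households = [{"treated": random.random() < 0.5} for _ in range(n)]

    TRUE_EFFECT = 5.0  # assumed effect on monthly income (illustrative)

    for h in households:
        baseline = random.gauss(100, 20)  # simulated monthly income
        h["income"] = baseline + (TRUE_EFFECT if h["treated"] else 0.0)

    treated = [h["income"] for h in households if h["treated"]]
    control = [h["income"] for h in households if not h["treated"]]

    # The difference in group means estimates the program's average effect.
    effect = statistics.mean(treated) - statistics.mean(control)
    print(f"Estimated effect: {effect:.2f} (true effect: {TRUE_EFFECT})")

Note that the sketch also illustrates the ethical limitation quoted above: by construction, the control households “miss out on the new thing” for the duration of the trial.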

The article is especially interesting to me because it highlights the challenge of getting people to accept research on the effectiveness of particular public policies.  For example, Duflo wanted to use her experimental methods to evaluate the effectiveness of microfinance as a tool for combating poverty.  As she puts it: “I have one opinion—one should evaluate things—which is strongly held.  I’m never unhappy with the results.  I haven’t yet seen a result I didn’t like.”  Many in the loan industry responded defensively, however, when her study of microfinance showed that it did not improve the economic well-being of loan recipients.  The reaction surprised Duflo because she “can hardly imagine people who do not welcome solid, hard-won data about their field of interest.”  The article questions one of her key assumptions: that policymakers “surely want to be associated with ideas that work.”  “After all, a politician may prefer to be associated with hoopla, or with cash in plain envelopes.”

The Poverty Lab’s “overarching experiment, perhaps, is a study of whether it can make its own agenda irresistible to decision-makers.”  Duflo describes the next phase of her work as “the effort to engage decision-makers not just as experimental partners but as adopters of programs that have already been vetted.”  One of our greatest challenges in implementing our public policy initiative will be to work in ways that maintain the School’s credibility with policy decision-makers.  Can we structure our policy work to increase the chance that they will value the results?  How do we avoid being viewed as advocates for certain policy outcomes?  I am confident that the strategic planning implementation committee co-chaired by Aimee Wall and John Rubin will come up with a good plan.  We don’t have the option of responding to North Carolina officials the way Duflo responded to the microfinance community: “We tried to help them.  They don’t want to be helped.  Too bad.”

1 thought on “The Challenge of Public Policy Research”

  1. All I can say is AMEN.

    Mike, and others, you might be interested in the fact that the approach of randomized controlled trials is officially the executive branch standard for evaluating programs, a standard established by the Office of Management and Budget in the White House. It was put in place, surprisingly for an administration not prone to scientific approaches to policy, under the Bush Administration as part of the Program Assessment Rating Tool. The standard is outlined in the document “What Constitutes Strong Evidence of a Program’s Effectiveness?” at http://www.whitehouse.gov/omb/assets/omb/performance/2004_program_eval.pdf.

    There are four MAJOR challenges to this standard, however: it is very difficult to run randomized controlled trials in many uncontrollable social situations, it is expensive, it usually takes a LONG time to establish results, and it raises ethical issues (do you deny food stamps to some families to see if they can get food another way?).

    However, faced with the alternative of just guessing as to whether something is working and throwing money at “sounds like a good idea to me” things, we need to work toward this goal.

    The crux of the issue, though, is that sometimes people don’t really want to know if the program actually works. I have had several experiences with this. In one, the client rejected the final product and results because she simply could not believe the results, even though she had walked through the methodology and data with us every step of the way. In another case, the client wanted an evaluation plan in order to get a grant, but once it came to actually doing the work to assess the impact of training, we did not hear from her again. The assessment was never done, mainly because the client did not feel it was important in the first place and only agreed to the plan in order to get the foundation money. In yet another case, a city dropped a project mid-stream when it was clear that the results might not support the administration’s already established, but not public, policy position.

    Some folks like policy work when it strengthens a position they have already taken. But their reaction would likely be the same to any work or advice we give. Where we can make a difference is in the opportunities where the client really does want to think about how to do their work better. I found that in work for the Golden Leaf Foundation and Prisoner Legal Services.
