8 Comments
Liam Baldwin:

Maybe Rossi’s takeaway was wrong or overly optimistic, but what is the correct takeaway? It does not necessarily follow that social science is doomed because successful interventions are rare (or evaluations of them are difficult).

Social scientists have doubled down on measurement and evaluation, and maybe they should still adopt a greater humility regarding ‘what they imagine they can design’, but how else are they supposed to respond?

Ben Recht:

Great question. It's important to consider the timeline here. These critiques all emerged in the 80s. In response, the 90s saw a refinement of methodology, the 00s a rise of accessible computing, the 10s an explosion in data. These were reasonable responses! And now we have to step back a bit and think about why more datafication, quantification, and computerization didn't fix the issues raised 40 years ago.

Chris:

Without knowing the details of the evaluations, or the social programs themselves, I wonder if "breaking even" isn't what success looks like. This could be either a) because the net utility produced takes the form of uncapturable positive externalities, or b) because the programs don't increase some form of public utility per se, but rather help more people remain viable in their current way of life. A lot of programs exist to help people remain viable in some way or another. That would seem to produce no net value if the baseline you're comparing against automatically assumes people will remain viable, as they do on average, and not, say, become homeless or jobless or ill. It also might show no net gain if you assume that the negative utility of those conditions is limited only to their economic impact.

But regardless, in either of those cases, more data and more compute aren't going to show more net value.

Ben Recht:

Yes. Part of the issue with many evaluations is the demand for quantitative metrics that must meet certain levels on average. This boxing in of possibilities of what "works" or "breaking even" mean inevitably narrows the possibilities of what policy can aspire to do.

Angela Zhou:

Great post! The folklore that "all causal effects in the social sciences are zero" had been knocking around in my head, but I had rarely heard about its origins in Rossi's laws.

I'm not sure the conclusion of Leamer's Taking the Con Out of Econometrics (RIP) comes down so strongly against the quantitative social sciences. Or at least, methodologists since Leamer have a "weak reading" of the argument that doubles down even further on quantitative methods: given the fragility of inferences made from data arising in the social sciences, let's build _even stronger_ quant methods that can supply some robustness to assumptions. (At least this is a rationalization that I have followed.) He has some work along these lines, too (like characterizing the set of estimates from weighted regression https://www.tandfonline.com/doi/abs/10.1080/01621459.1983.10477044), which predates recent interest in robustness by decades. To be sure, the more you are robust to, the more likely it is that you can't say anything quantitative at all. But the path to concluding that quant methods are uninformative passes through more "credible" estimation via even more methodological work.

On "public discontent with this technocratic mindset has reached an all-time high":

I think there is a lot more to say on the political economy of evaluation and palatable social control. I'm reminded of Elizabeth Popp Berman's awesome book "Thinking Like an Economist" (a great read on bureaucratic statistics), where she brought up the Community Action Program as a War-on-Poverty-era program that instead had much more of a focus on civic engagement and community empowerment. Because it disbursed funds to local organizations, some of them started organizing against the political machine. The Johnson administration quickly put the kibosh on this, "reorienting community action agencies toward providing services to the poor, rather than empowering them".

Ben Recht:

Berman's book has been on my list for a while. I need to read it!

I agree with your take on Leamer. To be fair, Meehl similarly doubled down on quantification. I've just started working my way through Lieberson, and he doesn't think that quantification is bad per se. (He instead argues that it is problematic to mimic the methodology of the natural sciences and center controlled experiments.) I need to finish the book before I can say anything more concrete, but it's interesting so far!

What links those examples is that they raise red flags about current quantitative social science methods. But their alternatives didn't pan out either. Forty years later, I think it's worth trying to imagine what else we can do.

rvenkat:

Do you plan to consolidate the readings for the course into one file? I apologize if you've done it already and I missed it. Thanks!

Ben Recht:

I haven't done this yet, but I do plan to do so. Coming soon...
