11 Comments
Oct 20, 2023

I don't disagree with your characterization of utilitarianism. But I recently re-listened to an episode about it in the History of Ideas series (https://podcasts.apple.com/ua/podcast/bentham-on-pleasure/id1508992867?i=1000508318017), which basically claims that a utilitarian lens was really developed as an analytical tool rather than a prescription. In his day, Jeremy Bentham wrote about utilitarianism in order to advocate for progressive, even radical causes, like divorce, equal rights for women, and the decriminalization of homosexuality.

author

Yes, I don't think utilitarianism is necessarily a "right" or "left" wing ideal. It's an additional axis. There are plenty of left-wing technocrats who love utilitarian economic analyses, right?

Oct 24, 2023

That's true, but I also think that today's utilitarians are not undertaking a radical project in the same way Bentham was. Maybe because its rationalizing and technocratic lens has been so thoroughly assimilated by the powers that be? Something something power of capital instead of the church?

author

Yeah, I agree with your assessment of technocracy.

Bentham existed in a world before Yule and Keynes, Neyman and Fisher, von Neumann and Dantzig. Once all of this language of optimization was formalized and turned toward policy, his utopian ideals started to look like grotesque tools to reify statist authority.

But Williams' critique of Bentham also points out that there are essential flaws in utilitarianism that predate modern datafication. Notably, that it makes individuals responsible for the actions of others and for any bad outcome they personally fail to prevent.

>>And yes, this is part of the reason why utilitarianism and its mutant children effective altruism and existential-risk millenarianism are not serious philosophies. Some things are not mathematically frameable as optimization problems. Morality is one of them.

I agree with Sarah above: I am not sure utilitarianism was ever meant to be a mathematical optimization technique (though perhaps that's how people think about it now?) but rather a form of philosophical analysis. From what I can tell, in any debate about an issue, utilitarianism forces you to make the case for your alternative based on what you think is "good" for the world, and deontology forces you to make a case based on some notion of abstract rights. If the objection to utilitarianism is that there are many alternative conceptions of the "good," the same might be said of deontology: there are many different rights at stake in an issue (and perhaps different rights for different stakeholders). No philosophical system is going to help us solve our problems if we disagree on what to do.

I share your skepticism of mathematical optimization as a way to answer questions, but the bigger problem is that there are deep disagreements, and the particular philosophical lens matters very little. I personally prefer utilitarianism because it forces someone to articulate their vision of the good and their priorities; that seems like it should lead to a more honest debate.

> Hypothesizing odds and costs of the unknowable doesn’t make you rigorous and rational. It just makes you a pedantic bullshit artist.

> And yes, this is part of the reason why utilitarianism and its mutant children effective altruism and existential-risk millenarianism are not serious philosophies. Some things are not mathematically frameable as optimization problems. Morality is one of them.

I love this so much, can we get a rant in class?

author

Hmm. Remind me on Tuesday. It would actually be appropriate then.

Oct 18, 2023

>You might even say that a probabilistic model is ill-suited to predict one-off events (I would say this!).

But everything we predict is a one-off event, right? The universe is never in exactly the same state it was previously, yet models are often capable of making accurate predictions. The challenge to me is making sure the test set is both (1) large enough and (2) "similar" enough to the data the model will see in the wild to convince you it will work on new data. For a lot of image classification problems, this is pretty easy, but for predicting the outcome of a presidential election, it is very, very hard, at least with data that is publicly available (I personally don't take any election models seriously).
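
To make (1) and (2) concrete, here is a toy sketch (the data, the model, and the shift are all invented for illustration): a classifier that looks fine on a held-out test set drawn from the training distribution can fall apart once the deployment data drifts.

```python
# Toy illustration (hypothetical data): a model that scores well on a
# "similar" test set can degrade badly when the wild data is shifted.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def sample(n, shift=0.0):
    # Two Gaussian classes; `shift` moves both class means at deployment time.
    y = rng.integers(0, 2, size=n)
    X = rng.normal(loc=y[:, None] * 2.0 + shift, scale=1.0, size=(n, 2))
    return X, y

X_train, y_train = sample(2000)
X_test, y_test = sample(2000)             # test set from the same distribution
X_wild, y_wild = sample(2000, shift=1.5)  # "in the wild": shifted distribution

model = LogisticRegression().fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))  # reassuring
print("wild accuracy:", model.score(X_wild, y_wild))  # much worse
```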

>Associating this with a dollar amount is grotesque and heartless.

Russ Roberts often expresses a similar sentiment, but I have to disagree. Dollars are a good enough reflection of people's values to be a useful guide in high-stakes decisions. To phrase it another way, would hiding such data from people lead to them making better decisions for themselves (i.e., with regards to whatever their personal "utility" functions are)? I strongly believe the answer is "no" for me. Note, however, that I'm not suggesting society should necessarily be optimizing a collective utility function based on these numbers. I agree with your overall point about "optimizing morality".

Anyway, thanks a lot for the blog. I read every post!

author

I totally agree with point 1. And I wish I was better able to articulate how I see the line between the repeatable and predictable and the singular and unknowable. I'll keep blogging until I figure it out? (I shouldn't commit to impossible projects)

With regards to point 2, I'm concerned that most people can't assign monetary costs with the level of precision needed for decision theory to be applicable. Especially if we're talking about the value of happiness, the value of health, the value of our loved ones. And, a conversation for another day, but I think that when we let "the market" decide, it dramatically undervalues such things.
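
To illustrate the precision problem, here's a toy expected-value calculation (all the numbers are invented): a modest error in the probability estimate flips the decision entirely.

```python
# Hypothetical numbers: the rule "act iff p * benefit > cost" is fragile
# when its inputs can't be estimated precisely.
p_estimates = (0.08, 0.10, 0.12)   # +/- 20% error around a guess of 0.10
benefit, cost = 1_000_000, 95_000  # dollar figures, made up

for p in p_estimates:
    decision = "act" if p * benefit > cost else "don't act"
    print(f"p = {p:.2f}: expected benefit = {p * benefit:,.0f} -> {decision}")
```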

In any event, thank you for reading, and thank you for commenting! I value your feedback.

>Associating this with a dollar amount is grotesque and heartless.

Like @Michael above, I too have to disagree. I agree that we shouldn't take cost-benefit analysis too much at face value; there are too many things in it that can be disputed. But a cost-benefit analysis is a useful technique to force a clear debate between different sides. Ezra Klein wrote a great piece on the NHS's QALY analysis (https://www.vox.com/2020/1/28/21074386/health-care-rationing-britain-nhs-nice-medicare-for-all). The UK forces medical and pharma researchers to specify a "quality of life" metric for their drug; in other words, they don't want to pay for a drug that prolongs life if the quality of that life is bad. It then puts a money figure on the quality of life. As Klein puts it, "If a treatment will give someone another year of life in good health and it costs less than 20,000 pounds, it clears NICE's bar."
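
A rough sketch of the arithmetic behind that bar (the 20,000-pound threshold is the figure Klein quotes; the example treatments are invented):

```python
# Cost-effectiveness check in the spirit of NICE's QALY analysis.
# The threshold comes from Klein's article; everything else is made up.
THRESHOLD_GBP_PER_QALY = 20_000

def clears_bar(cost_gbp, qalys_gained):
    # A treatment clears the bar if its cost per QALY is under the threshold.
    return cost_gbp / qalys_gained < THRESHOLD_GBP_PER_QALY

print(clears_bar(15_000, 1.0))  # True: 15,000 pounds per QALY
print(clears_bar(30_000, 1.0))  # False: 30,000 pounds per QALY
print(clears_bar(30_000, 2.0))  # True: back to 15,000 pounds per QALY
```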

It might sound grotesque (when I assign this piece in my Berkeley classroom, half of my students are horrified and the other half shrug and say they'll take it), but once again, as Klein puts it, "But there's no system, anywhere, that doesn't make these judgments in one way or the other. QALYs simply make them explicit, visible, and, importantly, debatable."

Obviously, in the NHS case, it isn't the "market" deciding; it's a group of experts. But still, for me, I'll take an honest, transparent debate grounded in cost-benefit analysis over an abstract debate between different "values" any day.

I'm curious about your take on this NHS stuff though.

I also wanted to defend utilitarianism but another time!

Regarding the money quote, I would say the "not mathematically frameable as optimization problems" property, due to unknowability, is present in increasing degrees in the frameworks you list. Thus, a plain utilitarian view might be fairly reasonable in scenarios with contained uncertainty; however, as you start unrolling your predictions and building Rube Goldberg cause-and-effect chains, all your uncertainties, unknowns, nonlinear effects, and disregarded terms (which often comprise the entirety of socioeconomic phenomena) start compounding exponentially, particularly if you don't apply a discount rate. As the epistemic-moral project gets more ambitious, arguments necessarily become less rigorous, and the scope for motivated hand-waving becomes galaxy-sized (perhaps galaxy-brain-sized). "Better optimization" under massive model misspecification makes things worse, since optimizing for efficiency means seeking high-leverage mechanisms: exactly the ones most likely to be brittle and to take you outside the epistemic trust region. It's quite a slippery slope out there.
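
A back-of-the-envelope sketch of the compounding point (all parameters are illustrative): if each causal link holds with probability p, a chain of n links holds with probability p**n, and an undiscounted far-future payoff swamps everything else unless you discount it.

```python
# Illustrative numbers only: long cause-and-effect chains decay fast,
# and discounting tames speculative far-future payoffs.
p = 0.9  # confidence in each individual causal link
for n in (1, 5, 10, 20):
    print(f"{n:2d}-link chain holds with probability {p**n:.3f}")

payoff, years, rate = 1e9, 100, 0.03  # hypothetical far-future payoff
print("undiscounted value:", payoff)
print("discounted value:", payoff / (1 + rate) ** years)  # roughly 5e7
```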

Nevertheless, the monotonicity argument works in reverse as well: surely some moderate amount of utilitarianism or evidence-based whatever is helpful, as long as you are careful and modest with your goals relative to the quality of your inputs.
