Discussion about this post

Matt Hoffman:

Wow, I'm honored to have inspired a post!

I'm happy to accept many of the points you make here. Well, not exactly happy, since aesthetically I kind of like the idea of a world where maximizing expected utility under a probabilistic forecast _is_ the right way to make all decisions in practice. But I absolutely agree that we don't live in that world, mostly because:

1. Any model we can formally specify (let alone compute with) is going to be a gross oversimplification.

2. We don't actually have coherent utility functions.

IMO these are very, very strong arguments against treating "life as an endless string of cost-benefit analyses". In most situations, you can't do the computation, because it's not only intractable, it's ill-defined.

So I agree that the "mindset...that we can make all our decisions by deferring to game-theoretic machine thinking" is wrongheaded. (How problematic this is for society and why is a separate question that I don't want to argue.)

But my main claim was that “every decision-making-under-uncertainty problem, like it or not, is a question of how to wager”, and I stand by it. If you've got a decision to make, and there are uncertain consequences to that decision, and you've got a clear preference for some of those consequences over others, _you are making a bet whether or not you think about it that way_.
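
To make that framing concrete, here's a minimal Python sketch (the actions, forecast, and utilities are all invented for illustration): picking an action under uncertainty is placing a wager on the distribution of outcomes it induces, whether or not you write any of this down.

```python
# Toy decision-as-wager: every action is a bet on a distribution of
# outcomes. All probabilities and utilities below are made up.

forecast = {"rain": 0.3, "sun": 0.7}  # assumed probabilistic forecast

utility = {  # how much you like each (action, outcome) pair -- assumed
    ("take_umbrella", "rain"): 1.0,
    ("take_umbrella", "sun"): 0.8,   # mild cost of carrying it around
    ("skip_umbrella", "rain"): 0.0,  # soaked
    ("skip_umbrella", "sun"): 1.0,
}

def expected_utility(action):
    return sum(p * utility[(action, outcome)]
               for outcome, p in forecast.items())

actions = {a for a, _ in utility}
best = max(actions, key=expected_utility)
print({a: round(expected_utility(a), 2) for a in actions}, "->", best)
```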

Often you can't solve the problem optimally, but that just means the problem is hard. Someone who lacks the working memory to count cards in an 8-deck shoe and plays blackjack anyway is still gambling. Deciding whether or not to take a potentially (but not definitely) life-saving drug despite the serious short-term side effects is a bet.
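
Even when the real problem is intractable, the structure of the bet is the same. Here's a deliberately toy version of the drug decision (every number is invented for illustration, and obviously not medical advice):

```python
# Toy version of the drug bet; all numbers invented.
p_works = 0.4            # chance the drug saves your life (assumed)
u_saved = 100.0          # utility of being saved, arbitrary units
u_side_effects = -10.0   # short-term cost, paid whether or not it works
u_status_quo = 0.0       # baseline if you skip the drug

ev_take = p_works * u_saved + u_side_effects   # 30.0
ev_skip = u_status_quo   # skipping is betting the drug wouldn't have worked
print("take" if ev_take > ev_skip else "skip")
```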

Sometimes the optimal solution is obvious, but that just means the problem is easy. Buying a lottery ticket is (usually) the wrong financial choice, but not playing is gambling that you wouldn't have won. Spending time with your kids instead of answering emails is (usually) the right life choice, but you're betting that it won't ultimately lead to you getting laid off (which would be bad for your whole family).
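
The lottery arithmetic makes the "easy" case explicit. With roughly Powerball-scale numbers (treat all of these as assumptions):

```python
# Back-of-the-envelope lottery expected value; numbers are illustrative.
ticket_price = 2.00
jackpot = 100_000_000.0   # assumed prize
p_win = 1 / 292_000_000   # roughly Powerball-jackpot odds (assumed)

ev = p_win * jackpot - ticket_price
print(f"expected value per ticket: ${ev:.2f}")  # about -$1.66
```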

Sometimes you don't have a clear preference, but in that case maybe it's misleading to call it a decision "problem"—is it really a problem if a solution is impossible in principle?

Finally, sometimes _trying to solve a decision problem rationally incurs a prohibitive cost_! Maybe you'd be happier if you didn't try so hard to be rational! Maybe you enjoy playing poker casually, but find playing your best stressful, so even though you'd prefer to win, it's better to play badly. Maybe you won't fall in love with _anyone_ if you insist on cost-benefitting everything to death, so even though on clear reflection you'd probably be happier in the long run with down-to-earth Francis than with exciting Taylor, blindly choosing Taylor is the only move you can make. In these situations the rational decision-theory framework breaks down, but you're still making a bet, even if you don't try to work out the odds.
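
You can even gesture at this within the framework by charging for deliberation itself, at which point the math tells you to stop thinking early. A toy sketch (the returns-to-effort curve and the cost rate are both invented):

```python
# Toy model of "rationality has a price": more deliberation improves the
# decision with diminishing returns but costs you directly. Numbers invented.
def net_utility(effort):
    decision_quality = 1 - 0.5 ** effort  # diminishing returns (assumed)
    deliberation_cost = 0.3 * effort      # stress/time cost (assumed)
    return decision_quality - deliberation_cost

best_effort = max(range(10), key=net_utility)
print(best_effort)  # 1 -- a little thinking, then stop
```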

Anyway, this is a lot of text for what is essentially a semantic argument (i.e., "what does the word 'gambling' mean?"), so I'll stop there.

John Quiggin:

💯
