Wow, I'm honored to have inspired a post!
I'm happy to accept many of the points you make here. Well, not exactly happy, since aesthetically I kind of like the idea of a world where maximizing expected utility under a probabilistic forecast _is_ the right way to make all decisions in practice. But I absolutely agree that we don't live in that world, mostly because:
1. Any model we can formally specify (let alone compute with) is going to be a gross oversimplification.
2. We don't actually have coherent utility functions.
IMO these are very, very strong arguments against treating "life as an endless string of cost-benefit analyses". In most situations, you can't do the computation, because it's not only intractable, it's ill-defined.
So I agree that the "mindset...that we can make all our decisions by deferring to game-theoretic machine thinking" is wrongheaded. (How problematic this is for society and why is a separate question that I don't want to argue.)
But my main claim was that “every decision-making-under-uncertainty problem, like it or not, is a question of how to wager”, and I stand by it. If you've got a decision to make, and there are uncertain consequences to that decision, and you've got a clear preference for some of those consequences over others, _you are making a bet whether or not you think about it that way_.
Often you can't solve the problem optimally, but that just means the problem is hard. Someone who lacks the working memory to count cards in an 8-deck shoe and plays blackjack anyway is still gambling. Deciding whether or not to take a potentially (but not definitely) life-saving drug despite the serious short-term side effects is a bet.
Sometimes the optimal solution is obvious, but that just means the problem is easy. Buying a lottery ticket is (usually) the wrong financial choice, but not playing is gambling that you wouldn't have won. Spending time with your kids instead of answering emails is (usually) the right life choice, but you're betting that it won't ultimately lead to you getting laid off (which would be bad for your whole family).
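(To make the lottery point concrete, here's a minimal expected-value sketch; the ticket price, jackpot, and odds below are made-up illustrative numbers, not real lottery figures.)

```python
# Hypothetical lottery numbers, purely for illustration.
ticket_price = 2.00           # dollars
jackpot = 100_000_000.00      # dollars, ignoring smaller prizes and taxes
p_win = 1 / 300_000_000       # chance of hitting the jackpot

expected_value = p_win * jackpot - ticket_price
print(f"Expected value of one ticket: ${expected_value:.2f}")
# About -$1.67: on average you lose most of the ticket price, which is why
# buying is "(usually) the wrong financial choice" -- and yet not playing is
# still a bet that your numbers wouldn't have come up.
```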
Sometimes you don't have a clear preference, but in that case maybe it's misleading to call it a decision "problem"—is it really a problem if a solution is impossible in principle?
Finally, sometimes _trying to solve a decision problem rationally incurs a prohibitive cost_! Maybe you'd be happier if you didn't try so hard to be rational! Maybe you enjoy playing poker casually, but find playing your best stressful, so even though you'd prefer to win it's better to play badly. Maybe you won't fall in love with _anyone_ if you insist on cost-benefitting everything to death, so even though if you thought about it clearly you'd probably be happier in the long run with down-to-earth Francis than exciting Taylor, blindly choosing Taylor is the only move you can make. In these situations the rational decision-theory framework breaks down, but you're still making a bet, even if you don't try to work out the odds.
Anyway, this is a lot of text for what is essentially a semantic argument (i.e., "what does the word 'gambling' mean?") so I'll stop there.
Agree completely. If those things aren't gambles, I don't see what is.
I do think you can attack Bayesian theory from the point of view that it isn't completely possible to separate inference and decision making, but most of these attacks are also attacks on _any_ sort of formal analysis.
💯
Thanks for this, Ben! It’s great.
Perhaps the - no, An! ;) - inverse way of coming at what I think you are getting at here is encapsulated in this piece I’ve been reading over and over again of late:
https://kelly.medium.com/and-not-or-ab3c3a2d3b74
I mean, if everything is “And”, mathematical principles can’t really apply, right?!
Seems similar to the argument that people are "rational actors".
I would like to argue for a broader notion of rationality.
There is some recent work in economics arguing this case (one economist who's been harping on this is Gerd Gigerenzer).
The book _Radical Uncertainty_ also makes this case even for problems which are traditionally viewed as gambling (investing, etc.). The authors criticize Kahneman and Tversky's experiments that purported to show that human behavior is "irrational".
There may be a theory yet to be discovered that shows how decision making in multi-agent societies has to be heuristic and non-mathematical, especially when all the systems are open and interrelated, and external noise does most of the work in the outcomes that transpire.
It seems like the objection is more to utility than probability. Of course part of the point is that they are dual to each other. But none of these articles are saying we should actively be using fuzzy logic or some alternative formalism instead of the good old Kolmogorov axioms. It’s more just, stop trying to quantify the value of choices.
Well said!
For balance, I would like to briefly mention some of the strongest arguments for framing decision-making as bets:
- It can be useful for prompting you to clarify what you actually mean, what you are actually talking about, by forcing you to specify what your claim actually is, and what kind of evidence would count as proving or disproving it.
- It can also prompt you to be less confident and/or less falsely specific when using common linguistic tropes like "that would never happen" or "that's a one in a million chance".
- It can be a reminder that all decisions, all thinking and feeling, including the joy we feel when we spend time with our children, and the commitments we make to taking care of each other in times of need, are downstream from something *similar to* the operations of a vast gambling game. Our minds, our hearts, and the rest of our organs are, in a way, the results of strategies that paid off, and were therefore able to stick around. Even if we refuse to agree to be Dutch Booked by the casino police, we come from a long line of ancestors who had no choice, and were, sort of, the result of just such bookings. Sometimes it's nice to remember that.
- It's fun.
One difference between the wagers of probability theory or Bayesian reasoning and actual decisions under uncertainty is that observer effects can exist. Real-life odds can be undefinable because they change according to your beliefs or actions. If you think or act as if the odds are 0.99, they can become 0.01, or vice versa. That's what "adverse selection" is in financial markets. That's the issue that Newcomb's Paradox explores. There may be no game-theoretic equilibrium between your belief and how the odds change in response to it that would let you act "rationally".
There can be gambles you cannot "win" (in some sense) by applying probabilistic thinking because probability itself doesn't model the situation well.
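A toy sketch of that observer effect (my own construction, with made-up numbers): the environment's odds react adversarially to your belief, so naively updating your belief to the odds it produces never settles anywhere.

```python
# Toy "observer effect": acting confident makes the event unlikely, and
# vice versa, so the odds you face depend on the belief you act on.
def realized_odds(belief: float) -> float:
    return 1.0 - belief  # adversarial response to your stated belief

belief = 0.99
for step in range(6):
    odds = realized_odds(belief)
    print(f"step {step}: belief={belief:.2f} -> realized odds={odds:.2f}")
    belief = odds  # naively adopt the realized odds as your new belief

# The belief just oscillates between 0.99 and 0.01; the fixed point at 0.5
# is never reached from any other starting belief under this update, which
# is the flavor of "no equilibrium between your belief and the odds".
```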
Very nice post. Thanks!
I think it's pretty self-evident that whenever we make a decision, we opt for the one we think is best. By definition this "best" option is the one that maximizes our expected utility. In principle we could come up with a pair of functions quantifying our beliefs and utility that corresponds to our actual behaviour. Therefore, I'd argue that all decision-making under uncertainty is utility maximization.
While that is fun to think about, in practice there's often little value to this, as we don't actually know what the values of our utility function are. I could try to put some values on caring for a sick loved one, or going to the pub instead, but retrofitting utility to a situation is just circular reasoning. So we can still frame this as utility maximization, but we gain no value in terms of more consistent reasoning or better decision making.
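A minimal sketch of that retrofit (the probabilities and utilities below are invented after the fact precisely to rationalize the choice, which is the circularity):

```python
# Made-up beliefs (probabilities) and utilities, reverse-engineered so that the
# expected-utility calculation "explains" a choice that was already made.
choices = {
    "care for sick loved one": {"they recover": (0.7, 10.0), "they don't": (0.3, 4.0)},
    "go to the pub":           {"fun evening":  (0.9,  6.0), "regret it":   (0.1, 1.0)},
}

def expected_utility(outcomes):
    # E[U] = sum over outcomes of P(outcome) * U(outcome)
    return sum(p * u for p, u in outcomes.values())

for name, outcomes in choices.items():
    print(f"{name}: E[U] = {expected_utility(outcomes):.2f}")
# 8.20 vs 5.50: the numbers were chosen so that caring for the loved one "wins";
# different numbers would have rationalized the pub just as easily.
```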
I think it's a good idea to put all reasoning in an abstract quantitative framework, but I find it really annoying when people use it when it's not really applicable. Like when EAs come up with some nonsense probabilities and utility of a future population of trillions to justify whatever comes to their mind. The reasoning part is okay, but all the input is garbage.
> Even investing doesn’t follow the cleanly derived utility maximizing rules of game-theory optimal gambling. Your portfolio manager is not making Kelly bets. As David Rothman pointed out in a comment, Kelly bets can be derived from beautifully simple theory, but no one uses them in practice. Instead, people at best run “fractional Kelly” rules to be even more risk-averse. While you can derive a lot of math explaining why fractional Kelly is “more optimal,” I haven’t seen any math that doesn’t tie itself into knots to justify deviating from Kelly’s criterion.
You don't need any math to justify deviating from Kelly's criterion. As David Rothman says, apparently those portfolio managers value reducing risk more than maximizing return. A lot of it just comes back to personal preference, which influences the utility function.
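For readers who haven't seen it, a minimal sketch of full versus fractional Kelly sizing for a simple repeated bet, using the standard formula f* = (bp - q)/b; the edge and the one-half fraction below are illustrative, not a recommendation.

```python
# Standard Kelly criterion for a repeated bet with win probability p and net
# odds b (you win b for every 1 staked). Numbers below are illustrative only.
def kelly_fraction(p: float, b: float) -> float:
    q = 1.0 - p
    return (b * p - q) / b

p, b = 0.55, 1.0                    # a slight edge on an even-money bet
full_kelly = kelly_fraction(p, b)   # 0.10: stake 10% of bankroll per bet
half_kelly = 0.5 * full_kelly       # "fractional Kelly": smaller stake, much
                                    # lower variance, modest cost to growth rate
print(f"full Kelly: {full_kelly:.1%}, half Kelly: {half_kelly:.1%}")
```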
>By definition this "best" option is the one that maximizes our expected utility.
It is this definition of best that people disagree with.
Matt Hoffman’s quote reminds me of the old Wall Street saying: “There are no bad deals, just bad prices.” Both reflect the idea that decision-making under uncertainty is fundamentally about wagering. But like the price-focused view of deals, this framing treats all actions as transactional decisions, stripping away the human meaning, context, and personal experience that define them.
It’s not that wagering frameworks are useless - they can and do clarify decision-making in bounded, one-off scenarios. But real life isn’t a series of one-offs. It’s dynamic, full of feedback, and influenced by context. That’s why I’d argue that expected probabilities matter - but not as final answers. They’re inputs to a broader stochastic process that better reflects how our lives unfold.
Unlike utility-maximizing models, which assume fixed preferences and clean tradeoffs, stochastic models capture evolving states, shifting goals, and the way decisions recursively shape our futures. Our choices influence future states, and the variance of uncertainty itself evolves over time.
Models with states, transitions, and absorbing dynamics offer a richer way to think about long-term, evolving decisions. Gambling metaphors can help orient our thinking, but they can’t capture the real complexity we live with every day.
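As a toy illustration of what such a model can look like (my own made-up example, not the commenter's): a small Markov chain with an absorbing state, where the interesting question is not a one-off wager but how the dynamics play out over time.

```python
import numpy as np

# Toy absorbing Markov chain with illustrative transition probabilities.
# States: 0 = "struggling", 1 = "stable", 2 = "secure" (absorbing).
P = np.array([
    [0.6, 0.3, 0.1],
    [0.1, 0.7, 0.2],
    [0.0, 0.0, 1.0],
])

# Expected number of steps until absorption from each transient state,
# via the fundamental matrix N = (I - Q)^-1 of the transient block Q.
Q = P[:2, :2]
N = np.linalg.inv(np.eye(2) - Q)
print("expected steps to 'secure' from [struggling, stable]:", N.sum(axis=1))
```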
People treating decision-making or reasoning as gambling annoys me, because all "decisions" between one affordance and another take place against the background of which uncertainties we can get rid of by just controlling them online. An infant cannot make an "optimal decision" to move their leg in just the right way that gets them walking. First they have to learn to stabilize their posture when lying on their stomach, then when crawling, then when walking bipedally.
(This is also my theory for why reinforcement learning agents can take hundreds of thousands of training episodes to learn a simple motor task like walking: we're trying to have them learn optimal Q-functions when they should be learning to stabilize their posture and then move *between* stable postures.)
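(For readers who haven't met the term: a minimal sketch of the textbook tabular Q-learning update that "learning an optimal Q-function" refers to; the states, actions, and constants are placeholders, and this is the standard rule being critiqued, not the posture-stabilization alternative.)

```python
import random
from collections import defaultdict

# Textbook tabular Q-learning, sketched only to make "optimal Q-function"
# concrete; states and actions here are abstract placeholders.
Q = defaultdict(float)                  # Q[(state, action)] -> estimated return
alpha, gamma, epsilon = 0.1, 0.99, 0.1  # learning rate, discount, exploration
actions = ["left", "right"]

def choose_action(state):
    # epsilon-greedy: mostly exploit current estimates, occasionally explore
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def update(state, action, reward, next_state):
    # Q-learning update: move Q(s, a) toward the bootstrapped target
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
```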