In the comments of last Thursday’s post, Matt Hoffman replied at length, starting with:
“Every decision-making-under-uncertainty problem, like it or not, is a question of how to wager.”
Matt is not alone in thinking this, but the word “every” makes the statement untrue for me. I’m happy to embrace pluralism again and let us all have our own truths. But outside of the casino, my truth is that no decision-making problems are about gambling. In fact, one of the more pernicious aspects of our running national nightmare is the oligarchy that parasitically leeches money from people it convinces to gamble.[1]
Since the future is unknown, every decision is made in the face of uncertainty. How we think about that uncertainty varies a lot. But how often can we cleanly turn a decision-making problem into a gambling problem? Almost never. Indeed, I can’t think of any examples outside of gambling itself. Unless you are shackled to a blackjack table, decision making just isn’t about wagering! There is no reason that anyone need conceptualize their life as an endless string of cost-benefit analyses. But lots of people do. I’m not denying that they do. But it’s a problem.
Even investing doesn’t follow the cleanly derived utility-maximizing rules of game-theory-optimal gambling. Your portfolio manager is not making Kelly bets. As David Rothman pointed out in a comment, Kelly bets can be derived from beautifully simple theory, but no one uses them in practice. Instead, people at best run “fractional Kelly” rules to be more risk-averse. And while you can derive a lot of math explaining why fractional Kelly is “more optimal,” I haven’t seen a justification for deviating from Kelly’s criterion that doesn’t tie itself into knots.
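For concreteness, here is the textbook Kelly rule in a minimal sketch (the parameters and the simulation are my own illustration, not anything a real portfolio runs): for a repeated bet that pays b-to-1 and wins with probability p, the Kelly criterion says to stake the fraction f* = p − (1 − p)/b of your bankroll each round, and “fractional Kelly” just scales that stake down.

```python
import random

def kelly_fraction(p, b):
    """Kelly fraction for a bet paying b-to-1 that wins with probability p."""
    return p - (1 - p) / b

def simulate(p, b, frac, n_bets=1000, seed=0):
    """Grow a bankroll of 1.0, staking `frac` times the Kelly fraction each round."""
    rng = random.Random(seed)
    f = frac * kelly_fraction(p, b)
    wealth = 1.0
    for _ in range(n_bets):
        stake = f * wealth
        wealth += stake * b if rng.random() < p else -stake
    return wealth

# Full Kelly maximizes the long-run growth rate; half Kelly deliberately gives
# up growth for a far less volatile wealth trajectory.
print(simulate(p=0.55, b=1.0, frac=1.0))  # full Kelly
print(simulate(p=0.55, b=1.0, frac=0.5))  # half Kelly
```

The point of the sketch is only that fractional Kelly trades growth for lower variance, which is exactly the deviation the clean theory has to contort itself to justify.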
I got excited about forecast coherence because it motivated probabilistic thinking solely in terms of prediction. There need not be any financial stakes involved. You just have to commit to being scored and tabulated. If your beliefs will be scored—whether the scores are remunerated or not—convex geometry forces you into making probabilistic forecasts.
But even this derivation comes out of a contrived mathematical game where the assumptions have to line up just right (continuous proper scoring) for you to get a clean story. That’s fine! It’s an elegant way to motivate logical probability. Pedagogically, I should be able to teach Bayesian logical probability without leading with gambling. Forecast coherence is one of many ways to argue for the additivity axiom. I personally like forecast coherence more than the derivation through Dutch books, and I also prefer it to the arguments that appeal to ranking the plausibility of statements and applying Cox’s theorem. But everyone has their own tastes.
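To see concretely how scoring forces probabilities, here is a minimal sketch of the key property of a proper scoring rule like the Brier score (the true probability 0.7 and the grid search are my own illustration): your expected score is best exactly when you report your actual belief.

```python
# Why a proper scoring rule rewards honest probabilities: for a binary event
# with true probability p, the expected Brier score of a reported forecast q is
#   E[(q - Y)^2] = p * (q - 1)^2 + (1 - p) * q^2,
# and a little calculus shows this is minimized exactly at q = p.

def expected_brier(q, p):
    return p * (q - 1) ** 2 + (1 - p) * q ** 2

p_true = 0.7  # illustrative true probability
grid = [i / 100 for i in range(101)]
best_report = min(grid, key=lambda q: expected_brier(q, p_true))
print(best_report)  # 0.7 -- the honest report wins
```

Nothing here requires money to change hands; the score alone does the work.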
If you think you will be judged on performance, and you need to forecast the plausibility of outcomes on a scale of 0 to 1, then you need to talk in the language of probabilities. But this pattern of mirroring normativity in the idiosyncrasies of computers is an epistemological trap recurrent in the information age:[2]
1. We collectively decide that we need to score predictions quantitatively.
2. Such quantification requires that predictions be expressed in terms of probability.
3. We forget why we decided to score things in the first place and tell ourselves that probability is the only way to make predictions.
The omnipresence of machines and their numbers convinces us that these numbers are inescapable. I’m fine with motivating probabilistic thinking in terms of scoring predictions. This was how Shannon and Wiener motivated probability models as they formulated our modern conceptions of information. But people are not computers.
All of our decisions are made in the face of uncertainty. Almost none are plugged into a Brier score or rewarded with a lottery payout. It’s not gambling when we decide who to date. It’s not gambling to choose to do something with our kids instead of answering emails. It’s not gambling when we care for a sick loved one. These statements are so obvious they sound ridiculous when you say them out loud. And yet there’s a particular mindset shared amongst a very powerful group of people who want us to believe that we can make all our decisions by deferring to game-theoretic machine thinking. It would be funny if it weren’t so terrifying.
If you want to read a more fleshed-out version of this argument, check out this review of Nate Silver’s book with Leif Weatherby.
[2] Leif describes this trap as part of a larger dialectic of information systems.
Wow, I'm honored to have inspired a post!
I'm happy to accept many of the points you make here. Well, not exactly happy, since aesthetically I kind of like the idea of a world where maximizing expected utility under a probabilistic forecast _is_ the right way to make all decisions in practice. But I absolutely agree that we don't live in that world, mostly because:
1. Any model we can formally specify (let alone compute with) is going to be a gross oversimplification.
2. We don't actually have coherent utility functions.
IMO these are very, very strong arguments against treating "life as an endless string of cost-benefit analyses". In most situations, you can't do the computation, because it's not only intractable, it's ill-defined.
So I agree that the "mindset...that we can make all our decisions by deferring to game-theoretic machine thinking" is wrongheaded. (How problematic this is for society and why is a separate question that I don't want to argue.)
But my main claim was that “every decision-making-under-uncertainty problem, like it or not, is a question of how to wager”, and I stand by it. If you've got a decision to make, and there are uncertain consequences to that decision, and you've got a clear preference for some of those consequences over others, _you are making a bet whether or not you think about it that way_.
Often you can't solve the problem optimally, but that just means the problem is hard. Someone who lacks the working memory to count cards in an 8-deck shoe and plays blackjack anyway is still gambling. Deciding whether or not to take a potentially (but not definitely) life-saving drug despite the serious short-term side effects is a bet.
Sometimes the optimal solution is obvious, but that just means the problem is easy. Buying a lottery ticket is (usually) the wrong financial choice, but not playing is gambling that you wouldn't have won. Spending time with your kids instead of answering emails is (usually) the right life choice, but you're betting that it won't ultimately lead to you getting laid off (which would be bad for your whole family).
Sometimes you don't have a clear preference, but in that case maybe it's misleading to call it a decision "problem"—is it really a problem if a solution is impossible in principle?
Finally, sometimes _trying to solve a decision problem rationally incurs a prohibitive cost_! Maybe you'd be happier if you didn't try so hard to be rational! Maybe you enjoy playing poker casually but find playing your best stressful, so even though you'd prefer to win, it's better to play badly. Maybe you won't fall in love with _anyone_ if you insist on cost-benefitting everything to death, so even though you'd probably be happier in the long run with down-to-earth Francis than with exciting Taylor if you thought about it clearly, blindly choosing Taylor is the only move you can make. In these situations the rational decision-theory framework breaks down, but you're still making a bet, even if you don't try to work out the odds.
Anyway, this is a lot of text for what is essentially a semantic argument (i.e., "what does the word 'gambling' mean?") so I'll stop there.
💯