British polymath Frank Ramsey [1] was one of the earliest to formulate rational utility maximization as stochastic optimization. In his 1926 paper “Truth and Probability,” Ramsey developed probability theory as “the logic of partial belief and inconclusive argument.” In doing so, he concluded that rational actors should maximize expected outcomes subject to their personal, subjective probability distributions.
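For concreteness, here is that model in its modern reading as a small sketch. The states, actions, and numbers are mine, not Ramsey’s: an agent with a subjective distribution over states picks the action with the highest expected utility.

```python
# Sketch of subjective expected utility maximization (modern reading,
# not Ramsey's own notation; the scenario is made up for illustration).
beliefs = {"rain": 0.3, "sun": 0.7}          # subjective probabilities over states
utility = {                                   # utility of each action in each state
    "umbrella": {"rain": 1.0, "sun": 0.4},
    "no_umbrella": {"rain": -1.0, "sun": 1.0},
}

def expected_utility(action):
    return sum(p * utility[action][state] for state, p in beliefs.items())

best = max(utility, key=expected_utility)     # the "rational" choice under this model
print(best, {a: round(expected_utility(a), 2) for a in utility})
```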
Ramsey was writing in response to the probabilistic framework of John Maynard Keynes, who posed probability as a tool for making sense of economic statistics. In his 1921 Treatise on Probability, Keynes proposed logical means of turning statistics, a series of tabulated counts of different quantities, into degrees of belief. He then attempted to show that probabilistic reasoning was a rational means of propagating uncertain beliefs. I’ll come back to Keynes at some point on this blog, as I think he is more responsible than anyone for the human-as-utility-maximizer model. But Ramsey’s formulation was the spark that brought homo economicus to life.
Ramsey found several logical flaws in Keynes’ development and proposed a fix by appealing to empiricism. What would it mean to measure someone’s beliefs? How could we set up an experiment to get a (potentially noisy) measurement of what someone believes? Ramsey rejected the idea that beliefs could only be quantified through introspection. He viewed belief as no different from electrical current, which can be represented indirectly by the deflection of a galvanometer’s pointer. For Ramsey, it was totally fine to have noisy measurements of belief. He just wanted to show there was a device to measure it.
It is here, through a thought experiment, that we get stuck with our current model of rationality. First, Ramsey assumed there must be a single number that we can assign to a degree of belief. He didn’t say why belief should be univariate. This, to me, already seems like a fatal flaw. Undeterred, Ramsey pressed on, proposing that a degree of belief could be measured by presenting hypothetical scenarios to a person and asking how they would react. Ramsey then makes a giant leap. Without citation or warning, he declares:
“The old-established way of measuring a person's belief is to propose a bet, and see what are the lowest odds which he will accept. This method I regard as fundamentally sound; but it suffers from being insufficiently general, and from being necessarily inexact.”
I clearly have to do more digging because I don’t know where this idea of measuring people’s beliefs by proposing a bet comes from. Did Ramsey just mean this colloquially? It’s now common for arguments to devolve into aggressive queries of “How much do you want to bet?” Is that what he meant here? He just pushed on, accepting that measuring belief is equivalent to running DraftKings.
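Whatever its provenance, the arithmetic of the proposal is at least concrete. Here’s a minimal sketch (my construction, not Ramsey’s notation) of how the lowest acceptable odds pin down a degree of belief:

```python
# Sketch of the "betting" measurement of belief (my reading, not a quote
# from Ramsey): if the lowest odds you'll accept on event E are a-to-b
# (win a if E happens, lose your stake of b if it doesn't), you are
# indifferent at those odds, so your expected gain is zero:
#   p*a - (1 - p)*b = 0   =>   p = b / (a + b)
def implied_belief(a, b):
    """Degree of belief implied by a lowest acceptable bet at odds a-to-b."""
    return b / (a + b)

# Someone who will take 3-to-1 on E, but nothing worse, acts as if P(E) = 0.25.
print(implied_belief(3, 1))  # 0.25
```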
The thing is, once we accept Ramsey’s bookmaker model of measurement, the metaphorical die is literally cast. We have no choice but to interpret all existence as a game of chance and all action as betting. It shouldn’t be surprising that probability becomes unavoidable if we model our lived experience as intermittent dealings with bookies. This gambling-centric view of existence is formalized in Ramsey’s famous “Dutch Book Argument”: if your belief system doesn’t obey the laws of probability, then there exists a bookie who can offer you a set of bets, each of which you’d accept, that together guarantee you lose money. A hundred years of subsequent philosophy and economics have grappled with the consequences of assuming life is lived in a casino.
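To see the argument’s teeth, here’s a minimal numeric sketch (illustrative numbers, not Ramsey’s own example): if someone’s betting quotients on an event and its complement sum to more than one, a bookie can sell them both bets and pocket the difference no matter what happens.

```python
# Minimal Dutch book sketch (illustrative numbers, not Ramsey's example).
# Suppose someone's betting quotients violate P(E) + P(not E) = 1:
p_e, p_not_e = 0.6, 0.6                     # incoherent: the two sum to 1.2

# At quotient q, they will pay q * stake for a ticket paying `stake` if
# the corresponding event occurs. The bookie sells them both tickets.
stake = 100
cost_to_bettor = (p_e + p_not_e) * stake    # they pay 120 up front

# Exactly one of E, not-E occurs, so the bettor collects `stake` either way.
guaranteed_loss = cost_to_bettor - stake    # 20, regardless of the outcome
print(guaranteed_loss)
```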
If we accept the Dutch Book framing of the universe, we are forced to accept that probability is the right way to reason about logical uncertainty. But the move from “this is how we measure people” in Ramsey 1926 to “let’s imagine a hyperrational robot” in Jaynes 70 years later is very telling. At least in this naive 1920s sense of rationality, we are now certain that People. Are. Not. Rational.
Since the work of Merrill Flood at the RAND Corporation in the 1950s, research has shown time and time again that the utility-maximizing model of people is not predictive of what people do! People are, without a doubt, predictably irrational. But it’s much worse than that: people are also unpredictably irrational. Economic experiments aren’t replicable or predictive of the outcomes of other experiments. The utility monster model of humanity is not even wrong. It’s worth going back to the beginning and recognizing what economic thinkers like Keynes and Ramsey were after. Ramsey is pretty clear in his foreword that he didn’t want this paper to be the only view:
“Probability is of fundamental importance not only in logic but also in statistical and physical science, and we cannot be sure beforehand that the most useful interpretation of it in logic will be appropriate in physics also. Indeed the general difference of opinion between statisticians who for the most part adopt the frequency theory of probability and logicians who mostly reject it renders it likely that the two schools are really discussing different things, and that the word 'probability' is used by logicians in one sense and by statisticians in another. The conclusions we shall come to as to the meaning of probability in logic must not, therefore, be taken as prejudging its meaning in physics.”
Ramsey argued for embracing the dappled world of probability. He was right. The singular path of utility maximization was a terrible model with terrible consequences.
But, hear me out: we have other choices! Fixing this will take a lot of concerted effort, but it is undoubtedly worth trying.
[1] Frank Ramsey not only laid the foundations of the subjective theory of probability, but he also wrote the first English translation of Wittgenstein’s Tractatus Logico-Philosophicus, had an important branch of combinatorics named after him, and died at 26.
Nice post. Reminds me of Ian Hacking's own attempt to go back to the beginning in The Emergence of Probability. He is similarly non-dogmatic about the need to nail down a single interpretation of probability: "The seemingly equivocal idea of probability seems too deeply entrenched in our ways of thinking for mere linguistic legislation to sort things out. There are frequency-dogmatists who say that only one probability idea is right, or is useful, or scientific. There are belief-dogmatists who say the same thing for their approach. Fortunately, many scientific workers are more eclectic. Most people do not even notice the differences that are so hotly contested by specialists. That is a problem for philosophers who try to understand ideas, as much in 2005 as it was in 1975. Predictably it will be there in 2035 too... There is no point in going into denial, and saying there is really just one concept, or that the differences between the two sorts of idea can be smudged over. Why does probability face two ways, towards frequencies and towards degree of belief? And why are there dogmatists who insist that there is only one coherent way to face?"
I got baited by your pointer to Ramsey's essay and its lack of citations, so I went looking for pre-Ramsey notions of "lowest odds which he will accept". Nothing found, but here are some things I learned.
I started with the 1963 paper on the Becker-DeGroot-Marschak (BDM) method ("Measuring utility by a single-response sequential method"), which Ramsey's (much earlier!) comment reminds me of. BDM have only two cites, but jumping to one of them, Mosteller and Nogee's "An Experimental Measurement of Utility" (1951), there's a funny note about Ramsey:
> "The authors are grateful to Professors Armen Alchian and Donald C. Williams for calling their
attention to F. P. Ramsey's 1926 essay, "Truth and Probability" (especially the section on "Degree
of Belief," pp. 166-84), available in the reprinted Foundations of Mathematics and Other Logical
Essays (New York: Humanities Press, 1950). When the experiment began, we were not aware of
Ramsey's idea for measuring degree of belief and utility simultaneously.
There are a lot of cites to 1940s work, almost all post-dating vN-M (1944), but nothing ancient from what I can tell. Jumping back to the BDM paper, there's this comment on the first page:
> One such postulate (associated with the name of Fechner) specifies that, for a given subject, action A has a larger expected utility than action B if and only if, when forced to choose between A and B, the probability that he chooses A is larger than the probability that he chooses B. It follows that if a choice between A and B is made many times under identical conditions, the person will choose the action with the larger expected utility more than half of the time. If he is indifferent he will choose each action 50 per cent of the time.
> Mosteller and Nogee (1951), in what was perhaps the first laboratory measurement of utility, based their experiment on the Fechner postulate.
This invocation of Fechner appears to be alluding to the "Weber-Fechner law" from 1800s psychometrics. It's a claim about perception, and it doesn't seem to connect the matter to bets from what I can tell. It's possible, but not at all established, that Ramsey was thinking of a "bet"-based interpretation of perception from the psychometric literature of the 1800s. Do report back if you find anything.
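For what it's worth, the Fechner postulate quoted above is easy to state as a stochastic choice model. Here's a minimal sketch (the logistic form is my assumption; BDM only require that the choice probability exceeds 1/2 exactly when the expected utility is higher):

```python
# Sketch of the Fechner-style stochastic choice postulate quoted above.
# The logistic functional form is my assumption; BDM only require that
# P(choose A) > 1/2 if and only if EU(A) > EU(B).
import math
import random

def p_choose_a(eu_a, eu_b, noise=1.0):
    """Probability of choosing A; exceeds 1/2 exactly when eu_a > eu_b."""
    return 1.0 / (1.0 + math.exp(-(eu_a - eu_b) / noise))

# Repeated forced choices: the higher-EU action wins more than half the time.
eu_a, eu_b = 1.0, 0.7
trials = 10_000
choices_of_a = sum(random.random() < p_choose_a(eu_a, eu_b) for _ in range(trials))
print(choices_of_a / trials)  # ~0.57, so A is the majority choice
```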