14 Comments
Dec 16, 2023 · edited Dec 16, 2023 · Liked by Ben Recht

Nice post. Reminds me of Ian Hacking's own attempt to go back to the beginning in The Emergence of Probability. He is similarly non-dogmatic about the need to nail down a single interpretation of probability: "The seemingly equivocal idea of probability seems too deeply entrenched in our ways of thinking for mere linguistic legislation to sort things out. There are frequency-dogmatists who say that only one probability idea is right, or is useful, or scientific. There are belief-dogmatists who say the same thing for their approach. Fortunately, many scientific workers are more eclectic. Most people do not even notice the differences that are so hotly contested by specialists. That is a problem for philosophers who try to understand ideas, as much in 2005 as it was in 1975. Predictably it will be there in 2035 too... There is no point in going into denial, and saying there is really just one concept, or that the differences between the two sorts of idea can be smudged over. Why does probability face two ways, towards frequencies and towards degree of belief? And why are there dogmatists who insist that there is only one coherent way to face?"

author

Hacking is so good.

Could you imagine if people fought about vectors the way they fight about probabilities? "How could we possibly associate the same mathematical object to the heading of a body in motion and the preferences of a person on the internet?" It is only once we take the vector and force it to sum to 1 that the interpretations become religious.
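(To make the quip concrete: a probability distribution is just a nonnegative vector rescaled to sum to 1. A minimal sketch; the function name and example scores are my own, purely illustrative.)

```python
# A "probability" is just a nonnegative vector forced onto the simplex.
# The same normalization applies whether the vector encodes headings
# of bodies in motion or a person's preferences.

def to_simplex(v):
    """Rescale a nonnegative vector so its entries sum to 1."""
    total = sum(v)
    if total <= 0:
        raise ValueError("need a nonzero, nonnegative vector")
    return [x / total for x in v]

preferences = [3.0, 1.0, 1.0]   # arbitrary nonnegative scores
p = to_simplex(preferences)     # [0.6, 0.2, 0.2], sums to 1
```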


Yes, religious is the right word. For a long time the Bayesianism vs Frequentism debate deeply confused me and I thought I was missing something subtle. Reading Hacking helped me see that the error is in mistaking these two frameworks for worldviews and/or seeing them as somehow exclusive of each other (the pick-a-side mentality).


Yes! I wrote about some of these issues here: https://realizable.substack.com/p/against-mindless-tychism

When I finally get my act together, I’ll write more on this, specifically on Hacking and also on Henry Kyburg’s “Chance”.


Wow what a quote from Kalman! Thanks for sharing.

Dec 26, 2023 · Liked by Ben Recht

I got baited by your pointer to Ramsey's essay and its lack of citations, so I went looking for pre-Ramsey notions of "lowest odds which he will accept". Nothing found, but here are some things I learned.

I started with the 1963 paper on the Becker-DeGroot-Marschak (BDM) method ("Measuring utility by a single-response sequential method"), which Ramsey's (much earlier!) comment reminds me of. BDM have only two cites, but jumping to Mosteller and Nogee's "An Experimental Measurement of Utility" (1951), they have a funny note about Ramsey:

> "The authors are grateful to Professors Armen Alchian and Donald C. Williams for calling their attention to F. P. Ramsey's 1926 essay, "Truth and Probability" (especially the section on "Degree of Belief," pp. 166-84), available in the reprinted Foundations of Mathematics and Other Logical Essays (New York: Humanities Press, 1950). When the experiment began, we were not aware of Ramsey's idea for measuring degree of belief and utility simultaneously."

There are a lot of cites to 1940s work, almost all post-dating vN-M (1944), but nothing ancient from what I can tell. Jumping back to the BDM paper, there's this comment on the first page:

> One such postulate (associated with the name of Fechner) specifies that, for a given subject, action A has a larger expected utility than action B if and only if, when forced to choose between A and B, the probability that he chooses A is larger than the probability that he chooses B. It follows that if a choice between A and B is made many times under identical conditions, the person will choose the action with the larger expected utility more than half of the time. If he is indifferent he will choose each action 50 per cent of the time.

> Mosteller and Nogee (1951), in what was perhaps the first laboratory measurement of utility, based their experiment on the Fechner postulate.

This invocation of Fechner appears to be alluding to the "Weber-Fechner law" from 1800s psychometrics. It's a claim about perception, but from what I can tell it doesn't connect the matter to bets. It's possible, but not at all established, that Ramsey was thinking of a "bet"-based interpretation of perception from the psychometric literature of the 1800s. Do report back if you find anything.
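(The Fechner postulate that BDM describe can be sketched as a stochastic choice model. The logistic link below is my own assumption for illustration; the postulate itself only requires that choice probability be increasing in the utility difference, with indifference giving 50/50.)

```python
import math

def p_choose_A(eu_A, eu_B, sensitivity=1.0):
    """Fechner-style stochastic choice: P(choose A over B) is
    increasing in eu_A - eu_B and equals 1/2 at indifference."""
    return 1.0 / (1.0 + math.exp(-sensitivity * (eu_A - eu_B)))

# P(A) > 1/2 exactly when EU(A) > EU(B); indifference gives 50/50,
# so over many repeated choices the higher-utility action wins
# more than half the time.
```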

author

This is great! Thanks, Johan! I love that you are on the case.

The Weber-Fechner law just says that our perception is generally logarithmic with respect to stimulus (e.g., loudness or brightness). In your reference here, is WF just motivating logarithmic utility curves?
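(For concreteness, a rough sketch of the logarithmic relation: perceived intensity grows with the log of the stimulus, so equal stimulus ratios produce equal perceptual increments. The constants are hypothetical.)

```python
import math

def perceived_intensity(stimulus, threshold=1.0, k=1.0):
    """Weber-Fechner: perceived magnitude ~ k * ln(stimulus / threshold).
    Doubling the stimulus adds a constant increment to perception."""
    return k * math.log(stimulus / threshold)

# Going from stimulus 1 -> 2 feels like the same step as 100 -> 200:
step_low  = perceived_intensity(2.0) - perceived_intensity(1.0)
step_high = perceived_intensity(200.0) - perceived_intensity(100.0)
```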

In any event, I'll keep looking too. Will keep you posted.


I think the invocation of Fechner in Mosteller and Nogee (1951) -- which I came across because BDM cite it -- isn't so much about the parametric (logarithmic) relationship as about the monotonicity of choice probability in some notion of preference. I think that's all they're citing Fechner for. It's definitely a stretch to connect this cite of Fechner to Ramsey's notion of "measuring a person's belief by a bet". I am really curious where Ramsey is pulling that idea from! Maybe some idea like that exists in the psychometrics literature surrounding Fechner-Weber…

Dec 18, 2023 · Liked by Ben Recht

"The thing is, once we accept Ramsey’s bookmaker model of measurement, the metaphorical die is literally cast. We have no choice but to interpret all existence as game of chance and all action as betting."

I'm not sure I follow you here. I've always thought of this method as an instrument to measure something specific (the subjective probabilities implicit in an individual's beliefs), without assumptions about how those beliefs relate to the rest of existence. Then again, subjective probabilities have never seemed like an interesting topic to me, so my intuitions here are severely under-developed.

author

Ramsey states that he is modeling people to "act in the way we think most likely to realize the objects of our desires, so that a person's actions are completely determined by his desires and opinions." And hence he thinks we can measure belief through actions.
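(Ramsey's measurement scheme can be sketched with standard odds-to-probability arithmetic. This is my illustration of the general idea, not Ramsey's own notation: if the lowest odds a person will accept on an event pay out `payout` for `stake` risked, the implied degree of belief is stake / (stake + payout).)

```python
def belief_from_odds(stake, payout):
    """Implied degree of belief in event E, given the lowest odds
    accepted on E: risk `stake` to win `payout` if E occurs."""
    return stake / (stake + payout)

# Accepting even odds (risk 1 to win 1) implies belief 1/2;
# demanding 3-to-1 (risk 1 to win 3) implies belief 1/4.
p_even = belief_from_odds(1.0, 1.0)   # 0.5
p_long = belief_from_odds(1.0, 3.0)   # 0.25
```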

I may be misunderstanding what you are saying. What is your interpretation of the proposed measurement of beliefs without relation to existence?

Dec 16, 2023 · Liked by Ben Recht

I am not sure where this series is going, but I would kindly suggest checking some more recent work, and I mean 21st century, in this space:

https://www.frontiersin.org/articles/10.3389/fnins.2011.00079

author

I'll be honest: I also have no idea where this series is going. :)

I'm sure I'm destined to get to prospect theory in the near future. Thanks for sending the link. I'll give it a read.

Dec 15, 2023 · Liked by Ben Recht

There is a lot to process here, but I am vaguely stuck on the fact that he died at 26! Rather tragic for someone with such copious output. Thanks for this post and the history of games and probability that has been released this semester.

Dec 17, 2023 · edited Dec 17, 2023

Cheers for the honesty. I will keep tagging along for a little longer :)

> I'm sure I'm destined to get to prospect theory in the near future.

It's perfectly possible :)

I hope the paper is useful. You should be able to trace a connection with an empirical approach to understanding "rationality" from the perspective of so-called "cognitive science", of which this paper is just a sample. Interestingly, they adopt many of the tools you find in Reinforcement Learning, for instance, but these are never considered the ultimate model, just a "baseline model".

This paper of Giovanni's (I know him, as he was part of my PhD panel) is one where something very close to the notion of "judgement" in Prospect Theory (or my corrupt understanding of it) is grounded in actual measurements of (living) agents in a lab.

On a similar but mostly parallel line, the work of Chris Baker, Rebecca Saxe, and Joshua Tenenbaum may be of interest:

https://pubmed.ncbi.nlm.nih.gov/19729154/

Note that they use the keyword "inverse planning" rather than "inverse reinforcement learning" (which had just come out a few years before, brought about in its REINFORCE-like formulation by Andrew Ng and Stuart Russell). There is a distinct emphasis on the existence of some internal form of agent "deliberation", i.e., resolving the trade-offs computationally before making a decision.

Edited: made a less general and more general assessment of current theory in economics :)
