16 Comments
Maxim Raginsky

Yeah, I am 100% with Meehl that there is no algorithm for converting vibes to probabilities. But there is an algorithm for revising the already given probabilities, and it relies on de Finetti's ideas of coherence (which can be reconciled with Kolmogorov's axioms: https://www.sciencedirect.com/science/article/pii/S0167715203003572). I like to think of the requirements of coherence as a potential field that enforces global constraints by exerting forces on local, possibly incoherent, probability assessments.
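
To make that "potential field" picture concrete, here is a rough toy sketch (my own illustration, not anything from the post): a local assessment of P(A), P(B), P(A and B), P(A or B) is coherent exactly when some probability vector over the four atoms reproduces it, and an incoherent assessment can be pulled back onto the coherent set by a least-squares projection. The numbers and the penalty trick for the sum-to-one constraint are just for illustration.

```python
# Toy sketch: enforcing de Finetti coherence by projection.
# Events A and B over four atoms: {A&B, A&~B, ~A&B, ~A&~B}.
# An assessment of P(A), P(B), P(A&B), P(A or B) is coherent iff some
# nonnegative atom vector w with sum(w) = 1 reproduces it exactly.
import numpy as np
from scipy.optimize import nnls

# Rows: P(A), P(B), P(A&B), P(A or B); columns: the four atoms.
M = np.array([
    [1, 1, 0, 0],   # P(A)      = w(A&B) + w(A&~B)
    [1, 0, 1, 0],   # P(B)      = w(A&B) + w(~A&B)
    [1, 0, 0, 0],   # P(A&B)    = w(A&B)
    [1, 1, 1, 0],   # P(A or B) = w(A&B) + w(A&~B) + w(~A&B)
], dtype=float)

def project_to_coherent(p, penalty=1e3):
    """Find atom weights w >= 0 with sum(w) ~= 1 whose induced probabilities
    are closest, in least squares, to the local assessment p."""
    A = np.vstack([M, penalty * np.ones(4)])  # heavily weighted sum-to-one row
    b = np.append(p, penalty * 1.0)
    w, _ = nnls(A, b)
    return w, M @ w  # atom weights and the "corrected" coherent assessment

# An incoherent assessment: P(A or B) < P(A), which violates monotonicity.
p_incoherent = np.array([0.7, 0.5, 0.4, 0.6])
w, p_coherent = project_to_coherent(p_incoherent)
print("atom weights:", np.round(w, 3))
print("coherent assessment:", np.round(p_coherent, 3))
```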

Ben Recht

Which algorithm do you have in mind? I'm assuming not conditionalization.

Maxim Raginsky

No, I'm talking about belief revision schemes, e.g. https://www.jstor.org/stable/pdf/2287313.pdf

Ben Recht

That's conditionalization, no? Do they propose an alternative to Jeffrey?

Maxim Raginsky

Ok, I have to clarify: As far as enforcing de Finetti coherence goes, the algorithm I had in mind is Jeffrey's conditionalization. There are some alternative proposals for probability dynamics though, e.g. by Ian Hacking.
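
For concreteness, here is a small sketch of Jeffrey's rule as I mean it (my own toy code, with numbers loosely after Jeffrey's cloth-by-candlelight example): given a prior over a finite space and revised probabilities for the cells of an evidence partition, each cell's conditional distribution is held fixed and the cells are reweighted.

```python
# Toy sketch of Jeffrey conditionalization on a finite outcome space.
# Jeffrey's rule: P_new(x) = q_i * P(x) / P(E_i) for x in cell E_i,
# i.e., conditional probabilities within each cell are preserved.

def jeffrey_update(prior, partition, new_cell_probs):
    """prior: dict outcome -> probability; partition: list of lists of outcomes;
    new_cell_probs: revised probabilities q_1, ..., q_k for the cells."""
    posterior = {}
    for cell, q in zip(partition, new_cell_probs):
        p_cell = sum(prior[x] for x in cell)
        for x in cell:
            posterior[x] = q * prior[x] / p_cell
    return posterior

# Outcomes are (color, sold?) pairs; the dim candlelight observation shifts
# beliefs about the color only.
prior = {
    ("green", "sold"): 0.12, ("green", "unsold"): 0.18,
    ("blue", "sold"): 0.21,  ("blue", "unsold"): 0.09,
    ("violet", "sold"): 0.16, ("violet", "unsold"): 0.24,
}
partition = [
    [("green", "sold"), ("green", "unsold")],
    [("blue", "sold"), ("blue", "unsold")],
    [("violet", "sold"), ("violet", "unsold")],
]
new_color_probs = [0.70, 0.25, 0.05]  # post-observation color probabilities

posterior = jeffrey_update(prior, partition, new_color_probs)
print({k: round(v, 3) for k, v in posterior.items()})
```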

Ben Recht

Yeah, I gotcha. I'm not arguing there is anything wrong with conditionalization, but more that people often think there's a clean algorithm to compute any old probability using conditionalization, and it's freaking hard! My issue is less about the inference algorithm and more about the ad infinitum counterfactual modeling necessary to apply any inference algorithm. Or, in other words, I don't think any algorithm can give a clean solution to the Duhem-Quine problem.

Maxim Raginsky

Yeah, there can be no clean algorithmic solution to the Duhem-Quine problem. My point was that, while there's no universal algorithm for coming up with an initial probability assessment, one could conceive of algorithmic approaches to revising or updating that assessment as one accumulates observations.

Maxim Raginsky

Jeffrey's rule is, but there are some generalizations and alternatives, e.g. here: https://personal.lse.ac.uk/list/PDF-files/BeliefRevision.pdf.

John Quiggin

Here's my take on these issues:

Grant, S., A. Guerdjikova, and J. Quiggin. 2020. "Ambiguity and Awareness: A Coherent Multiple Priors Model." The B.E. Journal of Theoretical Economics.

Ambiguity in the ordinary language sense means that available information is open to multiple interpretations. We model this by assuming that individuals are unaware of some possibilities relevant to the outcome of their decisions and that multiple probabilities may arise over an individual's subjective state space depending on which of these possibilities are realized. We formalize a notion of coherent multiple priors and derive a representation result that with full awareness corresponds to the usual unique (Bayesian) prior but with less than full awareness generates multiple priors. When information is received with no change in awareness, each element of the set of priors is updated in the standard Bayesian fashion (that is, full Bayesian updating). An increase in awareness, however, leads to an expansion of the individual's subjective state space and (in general) a contraction in the set of priors under consideration.
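
A toy sketch of the updating step described above (illustrative only, not code from the paper): with no change in awareness, every prior in the set is conditioned in the standard Bayesian way, and priors that rule out the evidence drop out.

```python
# Toy sketch: full Bayesian updating of a set of priors over a finite
# subjective state space. Each prior is a dict state -> probability;
# evidence is a set of states.

def bayes_condition(prior, evidence):
    """Condition a single prior on an observed event (a set of states)."""
    p_evidence = sum(p for s, p in prior.items() if s in evidence)
    if p_evidence == 0:
        return None  # this prior assigns the evidence zero probability
    return {s: (p / p_evidence if s in evidence else 0.0)
            for s, p in prior.items()}

def full_bayesian_update(priors, evidence):
    """Update each element of the set of priors; drop those ruled out."""
    updated = (bayes_condition(prior, evidence) for prior in priors)
    return [post for post in updated if post is not None]

# Two priors over states {s1, s2, s3}; we observe the state is in {s1, s2}.
priors = [
    {"s1": 0.5, "s2": 0.3, "s3": 0.2},
    {"s1": 0.2, "s2": 0.2, "s3": 0.6},
]
for post in full_bayesian_update(priors, {"s1", "s2"}):
    print({s: round(p, 3) for s, p in post.items()})
```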

Maxim Raginsky

Wow, what a story! I found more here: http://www.dartmouth.org/classes/53/archives/JimBoen.php

Ben Recht

Thank you!

Thomas Lavastida

Really enjoying the series! In the paragraph about dumping data into a CSV and running logistic regression, I think you meant to say "less than 5%" instead of "less than 95%" (the latter seems pretty easy to achieve).

Ben Recht

Hah, my bad! I fixed it.
