Yeah, I am 100% with Meehl that there is no algorithm for converting vibes to probabilities. But there is an algorithm for revising the already given probabilities, and it relies on de Finetti's ideas of coherence (which can be reconciled with Kolmogorov's axioms: https://www.sciencedirect.com/science/article/pii/S0167715203003572). I like to think of the requirements of coherence as a potential field that enforces global constraints by exerting forces on local, possibly incoherent, probability assessments.
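To make the "global constraints on local assessments" picture concrete: on a finite state space, checking de Finetti coherence reduces to a linear feasibility problem. A rough sketch (my own toy encoding, not from the linked paper), using scipy:

```python
# De Finetti coherence as linear feasibility: a set of probability
# assessments on events is coherent iff some single distribution over
# the atomic states reproduces every assessment at once.
import numpy as np
from scipy.optimize import linprog

def is_coherent(n_states, assessments):
    """assessments: list of (event, prob), where event is a set of state indices."""
    # One equality constraint per assessed event, plus total mass = 1;
    # linprog's default bounds already force every state probability >= 0.
    A_eq = [[1.0 if s in event else 0.0 for s in range(n_states)]
            for event, _ in assessments]
    A_eq.append([1.0] * n_states)
    b_eq = [p for _, p in assessments] + [1.0]
    res = linprog(c=np.zeros(n_states), A_eq=np.array(A_eq), b_eq=np.array(b_eq))
    return bool(res.success)

# Four states encode the truth values of A and B (state index = 2*A + B):
A, B = {2, 3}, {1, 3}
is_coherent(4, [(A, 0.6), (B, 0.5), (A & B, 0.3)])   # coherent
is_coherent(4, [(A, 0.6), (B, 0.5), (A & B, 0.55)])  # incoherent: P(A and B) > P(B)
```

The incoherent case is exactly where a Dutch book could be constructed against the local assessments.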

Which algorithm do you have in mind? I'm assuming not conditionalization.

No, I'm talking about belief revision schemes, e.g. https://www.jstor.org/stable/pdf/2287313.pdf

That's conditionalization, no? Do they propose an alternative to Jeffrey?

Ok, I have to clarify: As far as enforcing de Finetti coherence goes, the algorithm I had in mind is Jeffrey's conditionalization. There are some alternative proposals for probability dynamics though, e.g. by Ian Hacking.
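For concreteness, here's a minimal sketch of Jeffrey conditionalization on a finite state space (the numbers are purely illustrative):

```python
def jeffrey_update(prior, partition, new_marginals):
    """Jeffrey conditionalization: move each partition cell's mass to its
    new marginal while preserving conditional probabilities within cells."""
    posterior = [0.0] * len(prior)
    for cell, q in zip(partition, new_marginals):
        cell_mass = sum(prior[s] for s in cell)
        if cell_mass == 0:
            continue  # conditionals undefined on a null cell; leave it at 0
        for s in cell:
            # P_new(s) = P(s | cell) * q
            posterior[s] = prior[s] / cell_mass * q
    return posterior

# Three states; uncertain evidence shifts belief toward the cell {0, 1}.
prior = [0.2, 0.3, 0.5]
posterior = jeffrey_update(prior, [[0, 1], [2]], [0.8, 0.2])
# posterior is approximately [0.32, 0.48, 0.2]
```

Ordinary conditionalization is the special case where one cell of the partition gets probability 1.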

Yeah, I gotcha. I'm not arguing there's anything wrong with conditionalization, but more that people often think there's a clean algorithm to compute any old probability using conditionalization, and it's freaking hard! My issue is less about the inference algorithm and more about the ad infinitum counterfactual modeling necessary to apply any inference algorithm. Or, in other words, I don't think any algorithm can give a clean solution to the Duhem-Quine problem.

Yeah, there can be no clean algorithmic solution to the Duhem-Quine problem. My point was that, while there's no universal algorithm for coming up with an initial probability assessment, one could conceive of algorithmic approaches to revising or updating that assessment as one accumulates observations.

Jeffrey's rule is, but there are generalizations and alternatives, e.g. here: https://personal.lse.ac.uk/list/PDF-files/BeliefRevision.pdf.

Here's my take on these issues:

Grant, S., A. Guerdjikova, and J. Quiggin. 2020. Ambiguity and Awareness: A Coherent Multiple Priors Model. The B.E. Journal of Theoretical Economics 0

Ambiguity in the ordinary language sense means that available information is open to multiple interpretations. We model this by assuming that individuals are unaware of some possibilities relevant to the outcome of their decisions and that multiple probabilities may arise over an individual's subjective state space depending on which of these possibilities are realized. We formalize a notion of coherent multiple priors and derive a representation result that with full awareness corresponds to the usual unique (Bayesian) prior but with less than full awareness generates multiple priors. When information is received with no change in awareness, each element of the set of priors is updated in the standard Bayesian fashion (that is, full Bayesian updating). An increase in awareness, however, leads to an expansion of the individual's subjective state and (in general) a contraction in the set of priors under consideration.
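To illustrate the "full Bayesian updating" step the abstract describes: each prior in the set is conditioned separately on the same observation. A toy sketch (the numbers are mine, not from the paper):

```python
def bayes_update(prior, likelihood):
    # likelihood[s] = P(observation | state s)
    joint = [p * l for p, l in zip(prior, likelihood)]
    total = sum(joint)
    return [j / total for j in joint]

priors = [
    [0.5, 0.5],  # one prior the agent entertains
    [0.8, 0.2],  # another, reflecting the ambiguity
]
likelihood = [0.9, 0.1]  # the observation favors state 0

# Prior-by-prior (full Bayesian) updating: the set is revised elementwise;
# only a change in awareness expands or contracts the set itself.
updated = [bayes_update(p, likelihood) for p in priors]
```

Each posterior stays in the set; with full awareness the set collapses to the single Bayesian case.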

It's James R. Boen: https://m.startribune.com/obituaries/detail/10637318/?clmob=y&c=n

Wow, what a story! I found more here: http://www.dartmouth.org/classes/53/archives/JimBoen.php

Thank you!

Really enjoying the series! In the paragraph about dumping data into a CSV and running logistic regression I think you meant to say "less than 5%" instead of "less than 95%" (the latter seems pretty easy to achieve).

Hah, my bad! I fixed it.