Discussion about this post

Hostile Replicator:

I've been eagerly reading your lecture blogging series and these follow-ups on uncertainty, thanks for writing them! They've put into words a lot of the struggles I have had with probability and uncertainty over the past few years.

I started thinking about modelling uncertainty seriously about 6 years ago while working on a research program with military scientists. One of their areas of interest was subjective logic - where you reason with "subjective opinions" that quantify your subjective uncertainty in a point probability estimate. You can then do some maths to e.g. replace the point probabilities in a Bayesian network with a probability distribution at each vertex - the variance of the distribution representing your uncertainty in that probability estimate. The "subjective" component comes from how you originally specify this variance - you use a beta distribution and treat the two parameters as pseudo-counts of True/False outcomes (assuming a binary variable), so with no observations you just get a flat prior ("no opinion"). They had worked on ways to incorporate human subjective opinions into this framework, so that you could combine e.g. field agent reports with sensor data and arrive at a probability for some variable, with a reasonable uncertainty around that probability. All very interesting, but it just felt like kicking the can down the road - you still have to define a "subjective" opinion mathematically, and if you use the pseudo-count approach it seems to be equivalent to asking your subjective human what their frequentist estimate of a variable's probability is...
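To make the pseudo-count parameterisation concrete, here is a minimal Python sketch. The class and method names are my own, not an official subjective-logic API, and adding pseudo-counts is only a simplified stand-in for subjective logic's cumulative fusion (which also carries a base rate):

```python
from dataclasses import dataclass

@dataclass
class BetaOpinion:
    """Opinion about a binary variable, stored as Beta pseudo-counts.

    r = pseudo-count of True outcomes, s = pseudo-count of False outcomes.
    With r = s = 0 you are left with the flat Beta(1, 1) prior ("no opinion").
    """
    r: float = 0.0  # evidence for True
    s: float = 0.0  # evidence for False

    @property
    def alpha(self) -> float:
        return self.r + 1.0  # flat Beta(1, 1) prior

    @property
    def beta(self) -> float:
        return self.s + 1.0

    @property
    def point_probability(self) -> float:
        """Posterior mean: the point estimate you would plug into a Bayes net."""
        return self.alpha / (self.alpha + self.beta)

    @property
    def variance(self) -> float:
        """Spread of the Beta posterior: shrinks as evidence accumulates."""
        a, b = self.alpha, self.beta
        return a * b / ((a + b) ** 2 * (a + b + 1))

    def fuse(self, other: "BetaOpinion") -> "BetaOpinion":
        """Combine two independent evidence sources by adding pseudo-counts
        (a simplified stand-in for cumulative fusion)."""
        return BetaOpinion(self.r + other.r, self.s + other.s)


field_agent = BetaOpinion(r=3, s=1)    # sparse human report
sensor = BetaOpinion(r=40, s=20)       # many automated observations
combined = field_agent.fuse(sensor)
print(combined.point_probability, combined.variance)
```

The fused opinion ends up with a smaller variance than either source alone, which is the sense in which the framework tracks how sure you are about the probability separately from the probability itself.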

Another framework we talked about a lot was the Rumsfeldian "known knowns" etc. This seems like a pretty good way of critiquing probability theory used to model some kinds of uncertainty. Both "aleatoric" and "epistemic" uncertainty come under the "known unknowns" category (in subjective logic the aleatoric uncertainty would be your point probability and the epistemic would be the variance of the distribution around that point - defined by your prior and how much count data you observed). But how can you model "unknown unknowns" with probability?
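Concretely, for a Beta(\alpha, \beta) posterior over a probability p (my notation, using the pseudo-count parameterisation above):

$$
\mathbb{E}[p] = \frac{\alpha}{\alpha + \beta}, \qquad
\operatorname{Var}[p] = \frac{\alpha \beta}{(\alpha + \beta)^2 (\alpha + \beta + 1)}
$$

Beta(1, 1) and Beta(51, 51) both give a point probability of 0.5, but the variance drops from about 0.083 to about 0.0024 as observations accumulate - the same aleatoric estimate with far less epistemic uncertainty. Neither number says anything about whether the binary model itself is the right one, which is where the unknown unknowns sit.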

Herbert Weisberg's "Willful Ignorance: The Mismeasure of Uncertainty" uses the framework of ambiguity versus doubt - where ambiguity could cover both known and unknown unknowns (at least in my interpretation). His thesis is that probability theory was developed for situations of "doubt" (e.g. dice games with quantifiable states), but that most situations where it is now applied are rather "ambiguous", and probability is not necessarily suitable for them. Unfortunately the book needs a good edit and a firmer conclusion, otherwise I'm sure it would be cited more often in these kinds of discussions.

I've just joined <large famous tech company> and am in the process of being indoctrinated into their processes for decision making within the organisation. Most business decisions have to be made quickly with little data, and are made effectively without any reference to probability estimates! Some of these decision making frameworks may be of interest in this discussion.

Anyway, apologies for the long note - excited to read more of your thoughts on this!

Jacob N Oppenheim:

It has been a while since I read Jaynes, but isn't one of his underlying principles that nothing is actually random? I think his defense of comparability makes more sense if you accept his view that everything is deterministic, so probability only represents epistemic uncertainty, not some notion of true randomness.

Lots to argue with there, of course....

