5 Comments

I've been eagerly reading your lecture blogging series and these follow-ups on uncertainty, thanks for writing them! They've put into words a lot of the struggles I have had with probability and uncertainty over the past few years.

I started thinking seriously about modelling uncertainty about six years ago, while working on a research program with military scientists. One of their areas of interest was subjective logic, where you reason with "subjective opinions" that quantify your subjective uncertainty in a point probability estimate. You can then do some maths to, for example, replace the point probabilities in a Bayesian network with a probability distribution at each vertex, with the variance of the distribution representing your uncertainty in that probability estimate. The "subjective" component comes from how you originally specify this variance: you use a beta distribution and treat its two parameters as pseudo-counts of True/False outcomes (assuming a binary variable), so with no observations you just get a flat prior ("no opinion"). They had worked on ways to incorporate human subjective opinions into this framework, so that you could combine, say, field agent reports with sensor data to arrive at a probability for some variable, with a reasonable uncertainty around that probability. All very interesting, but it felt like kicking the can down the road: you still have to define a "subjective" opinion mathematically, and if you use the pseudo-count approach, that seems equivalent to asking your subjective human what their frequentist estimate of the variable's probability is...
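To make the pseudo-count idea concrete, here is a minimal sketch in Python (the helper name and the counts are mine, and I'm assuming a binary variable as above, not the full subjective logic opinion algebra): the two beta parameters start at a flat Beta(1, 1), True/False counts get added to them, the mean is the point probability, and the variance is the uncertainty around it.

```python
# Sketch of an "opinion" as beta pseudo-counts (illustrative names, not a
# real subjective logic library). The prior is a flat Beta(1, 1): "no opinion".

def opinion(true_count: float, false_count: float):
    """Return (point probability, variance) for a Beta(1 + t, 1 + f) opinion."""
    a = 1.0 + true_count   # alpha: flat prior plus True pseudo-counts
    b = 1.0 + false_count  # beta:  flat prior plus False pseudo-counts
    mean = a / (a + b)                              # point probability estimate
    var = (a * b) / ((a + b) ** 2 * (a + b + 1.0))  # uncertainty in that estimate
    return mean, var

print(opinion(0, 0))  # (0.5, ~0.083): flat prior, maximal uncertainty
print(opinion(8, 2))  # (0.75, ~0.014): a "field agent report" as 8 True / 2 False
```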

Another framework we talked about a lot was the Rumsfeldian "known knowns" etc. It seems like a pretty good lens for critiquing how probability theory is used to model some kinds of uncertainty. Both "aleatoric" and "epistemic" uncertainty fall under the "known unknowns" category (in subjective logic, the aleatoric uncertainty would be your point probability, and the epistemic uncertainty would be the variance of the distribution around that point, defined by your prior and how much count data you observed). But how can you model "unknown unknowns" with probability?
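To put toy numbers on that parenthetical (same beta pseudo-count setup as the sketch above, with made-up counts chosen so the posterior mean stays at 0.75): the aleatoric part, the point probability, can stay fixed while the epistemic part, the variance around it, shrinks as count data accumulates.

```python
# Counts chosen so the Beta(1 + t, 1 + f) mean is always 0.75: the aleatoric
# point probability is unchanged while the epistemic variance goes to zero.
for t, f in [(2, 0), (8, 2), (29, 9), (299, 99)]:
    a, b = 1.0 + t, 1.0 + f
    mean = a / (a + b)
    var = (a * b) / ((a + b) ** 2 * (a + b + 1.0))
    print(f"counts {t:3d}/{f:3d}: p = {mean:.2f}, variance = {var:.5f}")
```

None of which helps with the "unknown unknowns", of course.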

Herbert Weisberg's "Willful Ignorance: The Mismeasure of Uncertainty" uses the framework of ambiguity versus doubt, where ambiguity could cover both known and unknown unknowns (at least in my interpretation). His thesis is that probability theory was developed for situations of "doubt" (e.g. dice games with quantifiable states), but that most situations where it is now applied are instead "ambiguous", and probability is not necessarily suitable for them. Unfortunately the book needs a good edit and a firmer conclusion; otherwise I'm sure it would be cited more often in these kinds of discussions.

I've just joined <large famous tech company> and am in the process of being indoctrinated into their processes for decision making within the organisation. Most business decisions have to be made quickly with little data, and they are made effectively without any reference to probability estimates! Some of these decision-making frameworks may be of interest in this discussion.

Anyway, apologies for the long note; excited to read more of your thoughts on this!

No need to apologize, and I'd love to hear more about what <large famous tech company> uses to make its decisions. I am planning to write about how organizations and individuals actually make decisions and how there's a chasm between (academic) theory and practice. I'm collecting anecdotes, so please let me know what you're willing to share.

And thanks for the link to Weisberg. I've been reading a lot of probabilistic criticism (there's a hundred years of dissent). I'm also interested in understanding why the critiques never gain traction.

Plenty more to come. Thank you for reading.

Will be happy to share once I've learned more and seen the processes in action in different contexts.

It has been a while since I read Jaynes, but isn't one of his underlying principles that nothing is actually random? I think his defense of comparability makes more sense if you accept his view that everything is deterministic, so that probability only represents epistemic uncertainty, not some notion of true randomness.

Lots to argue with there, of course...

Yes, and I like the way Jaynes always shows you can model thermodynamics in terms of information gaps. It's beautiful stuff with lots of applications. There are tons of things I like about Jaynes; maximum entropy is genius.

But what I'm hoping to do in the blogs before Winter Break is question some of the foundations of decision making in engineering. In particular, the arguments for the universality of subjective probability. And Jaynes isn't the only one to formulate a theory of subjective probability.

Anyway, more to come!
