In their paper “What is the Chance of an Earthquake?”, David Freedman and Philip Stark argue for a pragmatic view of probabilistic modeling:
Probability is just a property of a mathematical model intended to describe some features of the natural world. For the model to be useful, it must be shown to be in good correspondence with the system it describes.
“Good correspondence” seems like a reasonable requirement. And it’s one that anyone invoking a stochastic view of the world should be forced to justify.
So let me give some concrete examples where I strongly advocate for the application of probability:
Games of chance like roulette, backgammon, and poker. Obviously, you have a strategic leg-up if you play these games assuming their chance element can be modeled by a random number generator.
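As a quick illustration, here is a minimal sketch (in Python, not tied to any particular casino) that models an American roulette wheel as a random number generator; under that assumption, a single-number bet recovers the familiar 5.26% house edge.

```python
import random

# Minimal sketch: estimate the return of a single-number bet in American
# roulette (38 pockets, 35-to-1 payout) by treating the wheel as an RNG.
def simulate_roulette(n_spins: int, seed: int = 0) -> float:
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_spins):
        pocket = rng.randrange(38)                # 0, 00, and 1-36
        total += 35.0 if pocket == 17 else -1.0   # always bet on 17
    return total / n_spins                        # average return per $1 bet

print(simulate_roulette(1_000_000))  # close to -2/38, about a 5.26% house edge
```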
Randomized Algorithms are my personal favorite example. Computers can generate bit streams that look random for all intents and purposes. You can use these random numbers to perform computational tasks that would be impossible (or at least much harder) if you required determinism. In randomized algorithms, the coder has full control over the random element. The world could be completely deterministic and randomized algorithms would remain a powerful tool.
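For a concrete taste, here is a minimal sketch of one classic example, Freivalds’ algorithm: it checks whether A @ B equals C using random 0/1 vectors, with each trial costing O(n²) arithmetic instead of the O(n³) needed to recompute the product.

```python
import numpy as np

# Freivalds' check: is A @ B == C? A wrong C slips past a single random
# trial with probability at most 1/2, so k trials miss it with
# probability at most 2^-k.
def freivalds(A: np.ndarray, B: np.ndarray, C: np.ndarray, trials: int = 20) -> bool:
    rng = np.random.default_rng()
    n = C.shape[1]
    for _ in range(trials):
        r = rng.integers(0, 2, size=n)            # random 0/1 vector
        if not np.array_equal(A @ (B @ r), C @ r):
            return False                          # definitely not equal
    return True                                   # equal with high probability

A = np.random.randint(0, 10, (100, 100))
B = np.random.randint(0, 10, (100, 100))
print(freivalds(A, B, A @ B))      # True
print(freivalds(A, B, A @ B + 1))  # almost certainly False
```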
I view Randomized Controlled Trials as a randomized algorithm that intervenes in the physical world. Again, since the experimenter controls the randomness, there are reasonable inferences you can make about the outcome of the experiment based on probability. (Shout out to my design-based inference friends.)
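Here is a minimal sketch of that idea as a randomization (permutation) test on made-up data: because the experimenter controls the coin flips that assign treatment, the null distribution of the test statistic comes from re-running those flips, not from any model of the outcomes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up outcomes and a made-up treatment assignment, purely illustrative.
outcomes = np.array([5.1, 6.3, 4.8, 7.2, 5.9, 6.8, 4.5, 7.5, 6.1, 5.4])
treated  = np.array([1, 1, 0, 1, 0, 1, 0, 1, 1, 0], dtype=bool)

def diff_in_means(y, t):
    return y[t].mean() - y[~t].mean()

observed = diff_in_means(outcomes, treated)

# Re-randomize the treatment labels many times to build the null
# distribution under the sharp null that treatment affects no one.
null_stats = np.array([
    diff_in_means(outcomes, rng.permutation(treated)) for _ in range(10_000)
])
p_value = np.mean(np.abs(null_stats) >= np.abs(observed))
print(observed, p_value)
```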
Noise modeling in physics. It’s hard to argue against statistical mechanics when it works. Having worked with signals for decades, I have seen with my own eyes 1/f noise, Poisson noise, and Gaussian noise. Weirder phenomena like shot noise and Johnson noise also have high correspondence with observations. And these noise models are undeniably useful for engineering.
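If you want to see these models in action, here is a minimal sketch that generates Gaussian, Poisson, and 1/f noise with NumPy. The 1/f (“pink”) recipe here, shaping the spectrum of white noise, is one common construction, not the only one.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4096

gaussian = rng.normal(0.0, 1.0, n)        # white Gaussian noise
poisson  = rng.poisson(lam=5.0, size=n)   # counting-style noise

# Pink (1/f) noise: scale the FFT of white noise by 1/sqrt(f), then invert.
white = rng.normal(0.0, 1.0, n)
freqs = np.fft.rfftfreq(n)
spectrum = np.fft.rfft(white)
spectrum[1:] /= np.sqrt(freqs[1:])        # leave the DC bin alone
pink = np.fft.irfft(spectrum, n)

# Sanity check: low frequencies should carry far more power on average.
psd = np.abs(np.fft.rfft(pink)) ** 2
print(psd[1:50].mean(), psd[-50:].mean())
```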
I could probably add more to the list, but these are definitely my top 4. I tried to think of a few more examples but became less certain of them as I tried to type them up. Let me know in the comments what I should add to this list.
In The Dappled World, philosopher of science Nancy Cartwright argues that though physicists claim universal laws of nature, these laws hold tightly only in special situations. In controlled experiments, we can make precise predictions about motion, mass, and matter, but the applicability of fundamental laws diminishes as systems get more complex and complicated. This is why physicists think diatomic molecules have one too many atoms.
We need a dappled view of probability. No, let me say that differently: we already have a dappled view of probability, and we’d be less confused if we made it clear that there isn’t a cohesive notion of how probability manifests itself in the world.
There are the mathematical axioms of probability and measure theory. There is the "Frequentist" view of probability, concerning the outcomes of repeatable experiments. There is the "Bayesian" view of probability, concerning our subjective beliefs about outcomes. There are fractions of wholes, phrased as percentages. Each of these manifestations has a context where it helps to explain outcomes or quantify uncertainty. But switching between these viewpoints mostly causes confusion.
In future blogs, I’ll explore some of these loci of confusion.
Interesting post. I suppose one could argue that the Born rule in quantum physics has a special status when making a list like this. Also why is shot noise weirder than Poisson noise? Aren't they synonymous? Is cryptographic hashing considered a randomized algorithm? It is fully deterministic but generates something that largely behaves as if uniformly random (which itself is its highly useful deliverable).
only tangentially related but can you tell us if conformal prediction is the be all end all for UQ? feels like everything these days is conformal