15 Comments
Lior Fox:

Well, as for the last point about experiments, you know what they say in physics: if your experiment needs a statistician, you need a better experiment [attributed to Rutherford but I have no idea if that's a true quote]

And I'm joining the comments before me in unashamedly claiming that if this series is heading to physics land, StatMech also deserves some attention [if only to materialize my prediction about the self-averaging argmin post...]

Ben Recht:

What 1920s stat mech should I cover? I was thinking about talking about Wiener processes. What else did you have in mind?

The 1800s are a little too well trodden, and I'm not sure I have much to add there yet.

Lior Fox:

Tend to agree about the 1800s.

I'm not sure, actually, and this is well beyond my area of expertise...

After all the disclaimers: The Ising model is an important piece of 1920s StatMech but afaik the closer relations with the questions at hand were only developed later. If you're willing to stretch the timeframe to the early 30s, then ergodic theory might be a good candidate (there's a nice historical review here: https://www.pnas.org/doi/full/10.1073/pnas.1421798112)

Lior Fox:

Oh and also, re Wiener, here's a nice quote from an interview of Jack Cowan (from the book "Talking Nets", I've shared quite a lot from that book on Twitter back then, and the Cowan interview is particularly good):

> [Wiener] said to me, "You know, the reason I got all my results on what are now called the Wiener-Banach spaces and stochastic processes is because of my strong physical intuition." He said he got it from looking at all the ripples and antipatterns in the Charles River.

Aman Desai:

Thank you for another awesome article, Professor! Apologies if this is a silly question, but how often do we need to consider the effect of measurement upon a system in non-QM statistical contexts? In most cases, do we assume that the measurement doesn't affect the results of the parameters/data we are estimating?

Ben Recht:

At a high level: there are all sorts of measurements that we only know how to do destructively. For example, we have to kill animals to do histology. Certain microscopes are necessarily destructive. The point is that *some* interaction needs to happen to measure. Some interactions are totally destructive, some are nearly passive. In quantum mechanics, all interactions are destructive, and this is confusing. But it's not a coincidence that quantum mechanics is only predictive about the infinitesimal.

Alexandre Passos:

I kinda get annoyed by discussions of QM that focus on measurement, because AFAICT the uncertainty doesn't come from measurement at all. Mostly it comes from the fact that quantities that are independent classically (time and energy, or position and momentum) are defined as Fourier transforms of each other in QM, and if something is localized in frequency space it is not localized in position space, regardless of how you measure it. That is, the quantities are not jointly defined, not merely hard to measure, and it's the inability to define these things independently that is weird. But what do I know.
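The Fourier-duality point can be seen numerically. Below is a toy numpy sketch (my own illustration, not anything from the thread; the grid sizes and the choice of Gaussian profiles are assumptions of the sketch): squeezing a Gaussian in position space spreads it in frequency space, with no measurement apparatus anywhere in the picture.

```python
import numpy as np

def rms_width(grid, weights):
    """RMS width of the discrete distribution proportional to `weights`."""
    p = weights / weights.sum()
    mean = (grid * p).sum()
    return np.sqrt(((grid - mean) ** 2 * p).sum())

x = np.linspace(-50, 50, 4001)
dx = x[1] - x[0]
# angular-frequency grid matching the FFT of the x grid
k = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(len(x), d=dx))

products = []
for sigma in (0.5, 1.0, 2.0):
    psi = np.exp(-x**2 / (4 * sigma**2))      # amplitude with position spread sigma
    phi = np.fft.fftshift(np.fft.fft(psi))    # its (discrete) Fourier transform
    dx_w = rms_width(x, np.abs(psi) ** 2)     # spread in position
    dk_w = rms_width(k, np.abs(phi) ** 2)     # spread in frequency
    products.append(dx_w * dk_w)              # narrowing one widens the other

print(products)
```

For Gaussians the product of the two spreads sits at the lower bound 1/2 regardless of sigma; any other profile gives a larger product. That's the Heisenberg relation (with hbar set to 1) as a statement about the function itself, not about any measurement of it.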

Ben Recht:

I agree, but let's clarify that there are multiple quantum phenomena associated with measurement. Yes, for any Fourier transform pair, there is an uncertainty principle. But you also have issues of measurement, back action, and entanglement with simple, discrete two-state spin systems.

Alexandre Passos:

But the spin issue is the same, right? We think of vertical and horizontal as orthogonal directions, but in some spin systems they have a nonzero dot product, so the wavefunction cannot independently assign probabilities to them (in the same way that a function and its Fourier transform are not independently specifiable).

Ben Recht:

I see what you're saying but still see them as slightly different. I know you can argue that you can't simultaneously measure a spin in the x and z directions, but what's the classical system that can be represented as a 2-dimensional complex vector?

Alexandre Passos:

Yes, that's a good point. The real twisty thing is the form of the wavefunction, where different observables correspond to different operators, and eigenstates of different operators are not necessarily orthogonal. We could probably model some oscillators as 2d complex vectors in state space, but that wouldn't have the weird interplay between operators that we have otherwise in QM.
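The two-state point above fits in a few lines of numpy (my own toy sketch, not from the thread): take the eigenvectors of the sigma_z and sigma_x Pauli operators and check that a state sharp in one basis is maximally spread in the other.

```python
import numpy as np

# Pauli operators for a single spin-1/2 system
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)

# z_up is "spin up along z"; x_up is "spin up along x"
z_up = np.array([1, 0], dtype=complex)
x_up = np.array([1, 1], dtype=complex) / np.sqrt(2)

# Sanity check: both really are eigenvectors with eigenvalue +1
assert np.allclose(sigma_z @ z_up, z_up)
assert np.allclose(sigma_x @ x_up, x_up)

# Born rule: probability of finding x-up when the state is exactly z-up.
# Classically, "up along z" and "up along x" feel like independent facts;
# here the overlap is nonzero, so sharpness in one basis forces spread
# in the other.
prob = abs(np.vdot(x_up, z_up)) ** 2
print(prob)  # 0.5
```

The non-orthogonality of eigenstates of different operators is exactly what has no counterpart in the classical oscillator picture.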

Aman Desai:

This is really interesting. Apologies if this is a silly question, but is this one of the reasons why Laplace's Demon wouldn't be able to predict the behavior of a QM system, even if it knew the exact initial states?

rvenkat:

An old paper by Shalizi and Moore, _What is a Macrostate?_ (https://arxiv.org/abs/cond-mat/0303625), seems relevant to this conversation. I am curious whether control-theoretic perspectives can build on the arguments of that paper and add to the discussion here.

Alex:

"The impossibility of measurement highlights the dirty secret of physics."

This reminds me of Hasok Chang's book "Inventing Temperature." There is always this race between theory and measurement: we use models to, say, estimate the temperature of the sun, but those models are themselves validated with tools that have obviously never "directly" measured the sun. Does the theory still apply at the sun?

There have been many cases where theories accord with the measurements at local levels, but they are contradicted at more macroscopic or microscopic levels - and only once we have better tooling are we able to confirm this.

The tooling also of course depends on the theory. You need to have a sense of what you are measuring - if you do get a thermometer near the sun, are you actually measuring temperature, or picking up something else? How would you know?

I have seen few books investigate the dialectic between theory and empiricism as well as Chang's.

Alex:

I think (though I am about as far away from physics as you can get) that your post concerns inescapable quantum uncertainty - i.e. the fundamental uncertainty which plops out of a mathematical model.

I believe there is also fundamental empirical uncertainty. When you make a measurement, you must control the experiment such that the conditions are as close as possible across multiple iterations.

ISO defines these "repeatability conditions" here in 0.3: https://www.iso.org/obp/ui/#iso:std:iso:5725:-1:ed-1:v1:en.

Even after controlling for all these conditions - we use the same measuring device from the same operator in the same lab during roughly the same time period - we will _still_ have variance in our measurements. This is perhaps self-evident - we can never of course control every single possible bit of variance involved in taking a measurement. And so, we will be left with irreducible uncertainty at the empirical level.
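As a toy simulation of that point (the numbers are purely made up): even under fixed repeatability conditions, the spread of individual readings never shrinks with more measurements; only the uncertainty of the *mean* does.

```python
import numpy as np

rng = np.random.default_rng(0)
true_value = 20.0   # hypothetical quantity being measured
noise_sd = 0.3      # assumed irreducible per-measurement noise

spreads, std_errors = {}, {}
for n in (10, 10_000):
    readings = true_value + rng.normal(0.0, noise_sd, size=n)
    spreads[n] = readings.std(ddof=1)          # spread of individual readings
    std_errors[n] = spreads[n] / np.sqrt(n)    # uncertainty of the sample mean

print(spreads)     # stays near noise_sd no matter how many readings we take
print(std_errors)  # shrinks like 1/sqrt(n): averaging only helps the mean
```

Averaging beats down the error of the estimated mean, but the per-reading variance is a floor you cannot control your way out of.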

I write about this topic more here: https://alexpetralia.com/2023/01/31/what-does-it-mean-for-data-to-be-precise-part-4/#the-iso-definition-of-precision
