9 Comments

Did you know that estimating measurement error was indeed the reason Gauss originally introduced the "Gaussian" distribution in the context of astronomy? He postulated three properties of measurement error and derived it mathematically: https://www.maa.org/sites/default/files/pdf/upload_library/22/Allendoerfer/stahl96.pdf

Oh wow, that's awesome. Also how wild is it that Galileo thought we should be using L1 instead of L2?
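
For a concrete picture of the L1-versus-L2 difference when combining repeated measurements of one quantity, here is a minimal numpy sketch (the numbers are made up): the L2 criterion is minimized by the mean, which one wild observation drags far away, while Galileo's L1 criterion is minimized by the median, which barely moves.

```python
# Minimal sketch: combining repeated measurements of one quantity under
# an L2 (least-squares) vs. an L1 (least-absolute-deviations) criterion.
import numpy as np

measurements = np.array([10.1, 9.9, 10.0, 10.2, 42.0])  # last one is a blunder

# Brute-force the two criteria over a fine grid of candidate estimates.
candidates = np.linspace(5.0, 45.0, 4001)
l2_loss = np.array([np.sum((measurements - c) ** 2) for c in candidates])
l1_loss = np.array([np.sum(np.abs(measurements - c)) for c in candidates])

print("L2 minimizer:", candidates[l2_loss.argmin()], "  mean:  ", measurements.mean())
print("L1 minimizer:", candidates[l1_loss.argmin()], "  median:", np.median(measurements))
```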

And Tycho Brahe was the one who started averaging observations. I think the fact that the normal curve appears in so many different ways (e.g., as the fundamental solution of the heat equation) suggests that it is indeed more fundamental. On the other hand, we rarely use the Gaussian kernel in kernel machines now. The Laplacian kernel is far more robust to the bandwidth parameter and is probably the better first choice. The rivalry goes on!
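
On the kernel point, here is a self-contained sketch of the kind of comparison being described: kernel ridge regression on a toy 1-D problem with a Gaussian versus a Laplacian kernel, swept across a wide range of bandwidths. The data, bandwidth grid, and regularization are illustrative choices, not anything from the thread.

```python
# Sketch: Gaussian vs. Laplacian kernel ridge regression as the bandwidth varies.
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression problem: noisy samples of a smooth function.
X_train = rng.uniform(-3, 3, size=(80, 1))
y_train = np.sin(2 * X_train[:, 0]) + 0.1 * rng.standard_normal(80)
X_test = np.linspace(-3, 3, 200)[:, None]
y_test = np.sin(2 * X_test[:, 0])

def gaussian_kernel(A, B, bandwidth):
    # exp(-||a - b||^2 / (2 * bandwidth^2))
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def laplacian_kernel(A, B, bandwidth):
    # exp(-||a - b||_1 / bandwidth)
    d1 = np.sum(np.abs(A[:, None, :] - B[None, :, :]), axis=-1)
    return np.exp(-d1 / bandwidth)

def kernel_ridge_mse(kernel, bandwidth, reg=1e-3):
    # Standard kernel ridge regression: alpha = (K + reg*I)^{-1} y.
    K = kernel(X_train, X_train, bandwidth)
    alpha = np.linalg.solve(K + reg * np.eye(len(X_train)), y_train)
    preds = kernel(X_test, X_train, bandwidth) @ alpha
    return np.mean((preds - y_test) ** 2)

for bandwidth in [0.05, 0.5, 5.0]:
    for name, kern in [("gaussian", gaussian_kernel), ("laplacian", laplacian_kernel)]:
        mse = kernel_ridge_mse(kern, bandwidth)
        print(f"bandwidth={bandwidth:>5}  {name:9s}  test MSE={mse:.4f}")
```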

If you haven't, you should read Deborah Mayo's 'Error and the Growth of Experimental Knowledge'. I am completely convinced by her argument that 'error statistics', not frequency statements, is what Neyman and Pearson actually wanted. The point of a confidence interval is to capture the "I have done extensive calibration and testing of my measurement device" part, not the other part.
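
One way to make that concrete: the "95%" in a 95% confidence interval is a claim about the procedure, checked the way you would check an instrument's calibration, by running it many times and counting how often it covers the truth. A toy simulation sketch (all numbers arbitrary):

```python
# Sketch: empirical coverage of a standard confidence-interval procedure,
# treated as a calibration check on the procedure itself.
import numpy as np

rng = np.random.default_rng(0)

true_value = 3.7        # the quantity being measured (arbitrary)
noise_sd = 0.5          # measurement noise, assumed known for simplicity
n_measurements = 10
n_repetitions = 100_000

covered = 0
for _ in range(n_repetitions):
    sample = true_value + noise_sd * rng.standard_normal(n_measurements)
    center = sample.mean()
    halfwidth = 1.96 * noise_sd / np.sqrt(n_measurements)
    if center - halfwidth <= true_value <= center + halfwidth:
        covered += 1

print(f"empirical coverage: {covered / n_repetitions:.3f} (nominal 0.95)")
```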

One of the biggest problems with my stream-of-consciousness approach to these blogs is that I'm not being scholarly at all. I apologize. I'm a big fan of Mayo's, and reading her work inspired a lot of today's post, though I do often side with those she is critiquing.

And while I also agree that Neyman and Pearson were trying to be practical, so was Fisher! The problem is that neither camp arrived at a satisfactory axiomatic framework for "sciencing," and the social conventions that took hold after World War II ended up running with an odd conflation of both camps' ideas.

Thank you for commenting!

"If we want higher precision, we need to build a better measurement device." But, we do averaging and integrating all the time to get higher precision.

Yes! But here I’m taking the view that software is part of the device. In this framing, “take multiple measurements and average them” is changing the measurement device and making it better. I’ll try to be more explicit about this moving forward.
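
A quick sketch of why this upgrade works (numbers invented for illustration): averaging N independent measurements behaves like a single measurement from a device whose noise scale has shrunk by a factor of sqrt(N).

```python
# Sketch: the average of N independent measurements acts like a better
# instrument whose noise standard deviation shrinks as 1/sqrt(N).
import numpy as np

rng = np.random.default_rng(0)

sigma = 1.0          # noise of a single raw measurement (arbitrary)
n_trials = 200_000   # number of simulated uses of the "averaged" device

for n in [1, 4, 16, 64]:
    raw_errors = sigma * rng.standard_normal((n_trials, n))
    averaged_sd = raw_errors.mean(axis=1).std()
    print(f"N={n:3d}  empirical sd of average={averaged_sd:.4f}"
          f"  theory sigma/sqrt(N)={sigma / np.sqrt(n):.4f}")
```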

Yes, this viewpoint needs to be developed further. For a while, there was a fairly robust research community around the theory of metrology as information processing: https://www.sciencedirect.com/science/article/pii/0263224194900388

IMO, the community that does this best these days is Computational Imaging. I pine for the good ol' compressed sensing days...
