Discussion about this post

Lior Fox

This is fantastic.

Btw, have you read "Radical Uncertainty" by Kay and King? I haven't read it myself yet (it's on the list...), but I got the recommendation from someone in the context of similar/related discussions. If I got his summary right, one of the main claims in that book is against the idea that "quantifying" uncertainty about policy questions in some scientific/mathematical way is always the way to go.

Tracy Lightcap

Ben,

You might read Federalist 37 on this. Short Madison: expecting uniform decisions from people with diverse interests concerning difficult and complex issues is usually folly. Falling back on "our values" is highly unlikely to lead to anything approaching a disinterested decision.

Madison is right about this. Our "common sense" is almost never common or analytically sophisticated enough to be useful. (For the following, I'm doing a straight lift from Deborah Mayo.) That's why we find ourselves falling back on science. It offers the only real possibility of reaching useful decisions about complex matters. Sure, there are human interests involved, and controversy is central to the entire endeavor. But we learn from the controversy; indeed, that's what science is about generally. Further, it is only by exploring unanswerable questions that we gain the capability to produce the measures and procedures we need to make those questions answerable.

Ok, Mayo off. But I think she's right and Meehl's wrong. On the whole idea of publishing too much and of using techniques, particularly significance testing, incorrectly, we're both with you.
