Discussion about this post

Liam Baldwin:

> “A ‘valid’ preregistration plan necessitates knowing the outcome of all aspects of an experiment before conducting it. Preregistration makes it impossible to adapt to the actuality of experimental conditions.”

This is definitely a large drawback of preregistration, but isn't it just a constraint introduced by frequentist assumptions (of which I recognize you're generally skeptical)? It would be nice to learn as you go from the data and adapt your methods accordingly, but doing so renders hypothesis tests biased, if not useless. Given that this seems to be what researchers are currently doing anyway, a mechanism that weakly enforces concordance with a priori design and hypothesizing seems reasonable.

How else should we get around this?
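
To make the learn-as-you-go bias concrete, here is a minimal Python sketch (all parameter values arbitrary): data are simulated under the null hypothesis, and a t-test is run after every batch, stopping at the first significant result. The nominal 5% test ends up rejecting far more often than 5%.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_sims, n_max, peek_every, alpha = 2000, 500, 25, 0.05

rejections = 0
for _ in range(n_sims):
    x = rng.normal(0.0, 1.0, n_max)  # data drawn under the null hypothesis
    # "Learn as you go": test after every batch, stop at first significance.
    for n in range(peek_every, n_max + 1, peek_every):
        if stats.ttest_1samp(x[:n], 0.0).pvalue < alpha:
            rejections += 1
            break

print(f"false-positive rate with peeking: {rejections / n_sims:.3f} "
      f"(nominal: {alpha})")
```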

Kevin Munger:

Finally got a chance to read this one -- I love the framing. I'd only add the critique that the term "validity" is /binary/, which means that internal and external (temporal) validity are somewhat different things. From the paper Drew and I wrote:

> “Translating this intuition to statistical practice within social science, we might say that an epistemic community that puts unbiasedness over precision will get neither. Assumptions are unavoidable. Proceeding from this premise, we aim to reframe the discussion of extrapolation away from ‘validity’ entirely. This word has the unfortunate implication of being binary; computer login passwords and driver’s licenses are either valid or invalid. To say that a password is ‘mostly valid’ is to say that it is ‘not valid.’ Scientific knowledge is not binary, and while most practitioners can successfully keep this reality in mind when discussing ‘external validity,’ the term introduces unnecessary confusion.”

https://osf.io/preprints/osf/nm7zr_v2

