Discussion about this post

Liam Baldwin:

> “A ‘valid’ preregistration plan necessitates knowing the outcome of all aspects of an experiment before conducting it. Preregistration makes it impossible to adapt to the actuality of experimental conditions.”

This is definitely a large drawback of preregistration, but isn’t it just a constraint introduced by frequentist assumptions (of which I recognize you’re generally skeptical)? It would be nice to learn as you go from the data and adapt your methods appropriately, but doing so renders hypothesis tests useless/biased. Given that this seems to be what researchers are currently doing, a mechanism that weakly enforces concordance with a priori design and hypothesizing seems reasonable.

How else should we get around this?

Sam:

Thanks for sharing this—insightful read! I would be curious to hear your positions on the recent influx of AI scientist systems like Sakana, Intology, and AutoScience working on near end-to-end automation of scientific discovery and paper writing for AI venues. What validity concerns do you have about these systems? Are there parts of the scientific research, peer review, and dissemination process where these systems might actually enhance internal, external, or construct validity evaluations?
