Loving the substack so far. The decision-centric perspective you discuss here, and contrasting it with other perspectives on experiments, is a big focus of Part I of the new Wiggins & Jones book. I think you posted about the book on twitter a few months ago, but if you haven't yet dug into it, do!
> Randomized experiments are more effective as part of regulatory mechanisms than as instruments of scientific discoveries.
why?
This maybe doesn't seem quite correct as a free-standing sentence (though it's OK in the context of the post). For example, things like Bayesian Optimization and similar sampling techniques have been deployed to guide scientific experiments into new regimes and discoveries.
Do you have particular examples in mind here?
Well, I was thinking of stuff like this: https://journals.aps.org/prl/abstract/10.1103/PhysRevLett.124.124801
But I guess you could argue it's some sort of rebranding of adaptive control. No big non-niche example of a discovery comes to mind off the top of my head, but it's still randomization & experimentation.
Yes, this is calibration via optimization. Definitely good stuff! But this is not an RCT in the narrow way I'm thinking about it (assigning n units to control and treatment at random and computing effect sizes).
Instead, I'd turn it around: I think it's useful to think of RCTs as a very primitive form of randomized optimization.
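To pin down the narrow definition I mean, here's a toy sketch (everything is simulated and the numbers are made up purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical simulation: n units with a constant treatment effect.
n = 1000
baseline = rng.normal(size=n)   # outcome each unit would have under control
true_effect = 0.3               # assumed constant effect, for illustration only

# The RCT: assign each unit to treatment or control uniformly at random.
treated = rng.random(n) < 0.5
outcomes = baseline + true_effect * treated

# "Computing effect sizes" = difference in arm means.
effect_estimate = outcomes[treated].mean() - outcomes[~treated].mean()
print(f"estimated effect: {effect_estimate:.3f}")   # close to 0.3
```

Viewed as optimization, this is a two-point random search over the set {control, treatment}: evaluate both arms on random samples and keep the one with the better mean. That's the sense in which I mean "primitive."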
Realization of input signals for system identification is another classic type of randomized measurement algorithm, no? I know I may be drifting off-topic, but this one is significant in engineering and sometimes in science (again mainly calibration; discovery, I'm not sure).
In the context of systems and control, I recommend reading this paper by Marco Campi and the accompanying discussion:
paper: https://marco-campi.unibs.it/pdf-pszip/EJC-randomized-algorithms.pdf
discussion: https://homes.esat.kuleuven.be/~sistawww/smc/jwillems/Articles/JournalArticles/2010.2.pdf
It is. One of those fun cases where the "persistence of excitation" condition for identification is satisfied by random noise.
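Here's a quick numerical illustration of that point (the order m and signal length are arbitrary choices for the sketch): with a white-noise input, the empirical covariance of the lagged inputs stays full rank, which is exactly persistence of excitation of order m.

```python
import numpy as np

rng = np.random.default_rng(0)

# White-noise input signal.
N, m = 10_000, 8              # signal length and excitation order (arbitrary)
u = rng.standard_normal(N)

# Stack lagged regressors phi(t) = [u(t), u(t-1), ..., u(t-m+1)].
Phi = np.column_stack([u[m - 1 - k : N - k] for k in range(m)])

# Persistence of excitation of order m: the empirical covariance
# R = (1/N) * sum phi(t) phi(t)^T must be positive definite.
R = Phi.T @ Phi / Phi.shape[0]
print("smallest eigenvalue:", np.linalg.eigvalsh(R).min())  # ~1, bounded away from 0
```

A single sinusoid, by contrast, only excites two directions, so the same check would turn up near-zero eigenvalues for any m > 2.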
Two part answer:
1) I can think of no discoveries that were actually made by randomized controlled trials.
2) As a mechanism for measuring whether a treatment is as effective as its proponents claim, RCTs have been very valuable in medical regulatory frameworks. Their value in tech (through AB testing) is harder to gauge, but I've had many data scientists tell me that they use AB testing as a way to prevent the over-committing of bad code (as committing is how people get promoted).
Sure, both make sense to me. But is anyone actually claiming RCT -> discovery, as opposed to RCTs validating whether or not some claim is true?
As you and Erik both validly point out, I'm strawmanning this argument in this blog post. I think there's a weird idea behind what it means for RCTs to be a "gold standard." I like how Angus Deaton puts it:
"there is movement in development economics towards the use of randomized controlled trials (RCTs) to accumulate credible knowledge of what works."
There's this idea that RCTs help us find what works, and perhaps then tell us what's true (hence discovery). I'll try to get into this more in future posts when I talk about observational causal methods.
"things got muddled in the 1930s" well that's one way to put it
LOL. Too tactful? Is part of the reason stats is such a mess today because Fisher and Neyman hated each other so much?