Could strengthen case studies too by pre-registering which patients you'll write studies for, to avoid the problem of (perhaps unconsciously) cherry-picking particularly vivid cases that support your preferred hypothesis.
I think case studies are considered poor evidence because most researchers chase tiny effects, and case studies are indeed poor evidence for tiny effects. With n = 10 the effects will go in all directions and nothing can be concluded. With n = 2250, maybe you will get a publishable p-value. Even if the effect is only 1%, you can multiply it by the number of students in the US and argue that the effect is substantively significant.
https://twitter.com/jayvanbavel/status/1681719800450490368
For vaccines, a small-n case study is also perilous: what if no one in the study gets COVID? But there's also a concern about effect size. I'd guess the effect size of penicillin >> effect size of vaccines >> effect size of social science experiments. I'm no expert, but there seems to be a lot of inherent randomness in how vaccines affect outcomes, so n = 10 is problematic.
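As a rough illustration of the sample-size point above, here is a minimal simulation sketch. It assumes a small standardized effect (d = 0.06) and a simple two-sample t-test, neither of which comes from the discussion; they are just stand-ins for "tiny effect" and "standard analysis." At n = 10 per group the test almost never detects the effect, while at n = 2250 per group it does so roughly half the time, which is the "maybe you will get a publishable p-value" regime.

```python
# Hypothetical power simulation: a tiny effect is invisible at n = 10
# but detectable maybe half the time at n = 2250 per group.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
EFFECT = 0.06  # assumed standardized effect size, chosen purely for illustration

def simulated_power(n_per_group, reps=2000, alpha=0.05):
    """Fraction of simulated studies that reach p < alpha."""
    hits = 0
    for _ in range(reps):
        control = rng.normal(0.0, 1.0, n_per_group)
        treated = rng.normal(EFFECT, 1.0, n_per_group)
        _, p = ttest_ind(treated, control)
        if p < alpha:
            hits += 1
    return hits / reps

for n in (10, 2250):
    print(f"n = {n:>4} per group: power ~ {simulated_power(n):.2f}")
```

With these assumptions the n = 10 run hovers around the false-positive rate (~0.05), while the n = 2250 run lands near 0.5; the exact numbers depend entirely on the assumed effect size.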
But I think there is an additional, more philosophical reason why case studies are considered suspect (at least in social science). There are two kinds of case studies. On the one hand, some are done by empirical scholars within the “framework” or “worldview” of statistical, positive science. These will be considered less suspect. People will say: if you have a large effect and it’s possible to do a case study, do a case study! Famously, this is what’s argued here: https://en.wikipedia.org/wiki/Designing_Social_Inquiry
But on the other hand, some case studies are “hermeneutic” or “interpretive”, in this tradition: https://static1.squarespace.com/static/55c3972ee4b0632d3480491b/t/56eb3ad537013b8180b9159c/1458256600062/Taylor_InterpretationandtheSciencesofMan.pdf A case study of this kind tries to “make sense” of a phenomenon, and many people who do science will simply roll their eyes. I think that stuff is often fascinating, but it is indeed quite different from science. And in practice, in social science, it’s not always clear whether a given case study is of kind 1 or kind 2.
Yes, I plan on pushing more on what makes a "good" case study. No particular path of inquiry is going to be purely "good."
Not to get too off course, but some things are, in fact, ideal for randomized trials. Vaccines would be my primary example. But it's worth thinking about why. In particular, vaccines are not therapeutics; they are preventatives. We might not be best served evaluating therapeutics and preventatives with the same measurement device.
Speaking of synchronicity: my post from today was about the emergence of chemical engineering, and the first illustration was the incredible speed with which large-scale production of penicillin was set up during WWII. https://realizable.substack.com/p/scaling-up-penicillin
Wild that we'd both write about the same thing at the same time. But antibiotics were among the greatest inventions of the 20th century, so it was bound to happen.
I was having root canal surgery this morning, so antibiotics were certainly on my mind.