Maybe I am too control-pilled (and I am), but my view is that statistical procedures produce open-loop decision rules, and any claim of inferential validity can only be made once the loop is closed by connecting the rules to the environment they were meant for.
yeah, it's pretty simple, right? You don't need to know what a Nyquist plot is to see the necessity of iteration and feedback.
IIRC, it was found that about 30% of medical papers used the wrong statistics. Whether that was from ignorance or p-hacking, I don't know. What does bother me is that so much work rests on small samples, e.g., 3 treatments and 3 controls. The distributions are assumed Gaussian, and a t-test is used. This seems unwarranted.
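A quick simulation makes the n=3 complaint concrete. This is just a sketch with illustrative choices of mine (a lognormal null to stand in for "not actually Gaussian", and a 1-SD true effect for the power check): even when the test's error rate isn't badly miscalibrated, three-per-group has almost no power to detect a large real effect.

```python
import numpy as np

rng = np.random.default_rng(0)
n, sims = 3, 20_000
t_crit = 2.776  # two-sided 5% critical value of Student's t with df = 2n - 2 = 4

def t_stat(a, b):
    """Pooled-variance two-sample t statistic for equal group sizes."""
    sp2 = (a.var(ddof=1) + b.var(ddof=1)) / 2.0
    return (a.mean() - b.mean()) / np.sqrt(sp2 * 2.0 / n)

# Power: Gaussian data with a genuinely large effect (one full SD).
power_hits = sum(
    abs(t_stat(rng.normal(1.0, 1.0, n), rng.normal(0.0, 1.0, n))) > t_crit
    for _ in range(sims)
)

# Calibration: both groups drawn from the same skewed (lognormal)
# distribution, so every rejection is a false positive.
false_pos = sum(
    abs(t_stat(rng.lognormal(0.0, 1.0, n), rng.lognormal(0.0, 1.0, n))) > t_crit
    for _ in range(sims)
)

print(f"power at n=3 per group, effect = 1 SD: {power_hits / sims:.2f}")
print(f"rejection rate under a skewed null:   {false_pos / sims:.2f}")
```

The power number is the damning one: most runs of a 3-vs-3 experiment will miss even a one-standard-deviation effect, which means the significant results that do get published are a biased sample.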
> On the other hand, epidemiologists got way over their skis with cohort studies. Everyone wants to find the new intervention that’s a cigarette...
Totally agree, but I think this is more a case of Sturgeon's Law than an indictment of the field of statistics. When you actually go and read a lot of these papers, they seem pretty shoddy. You often notice right away that there's a high risk of confounding and absolutely no chance they have rich enough data to robustly control for it. The model often seems so simplified and unrealistic that the interesting part is thinking up mechanisms the study ignores that could have generated the result. Then there are some really high-quality observational studies that still have some sort of story attached, but the story holds up to a lot of scrutiny -- the argument "well, maybe you just got a 1-in-100 outcome" starts to feel as strong as any other.
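To make the confounding point concrete, here's a toy simulation (everything in it -- the unmeasured trait, the proxy, the coefficients -- is invented for illustration): an exposure with zero causal effect picks up a large apparent effect, and adjusting for a noisy measurement of the confounder shrinks the bias but doesn't come close to removing it.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 100_000

# Hypothetical mechanism: an unmeasured trait U (say, health-consciousness)
# drives both who takes the "exposure" and the outcome. The exposure itself
# has NO causal effect, so any estimated effect is pure confounding.
u = rng.normal(size=N)
exposure = (u + rng.normal(size=N) > 0).astype(float)
outcome = 2.0 * u + rng.normal(size=N)

# Naive comparison of exposed vs. unexposed means.
naive = outcome[exposure == 1].mean() - outcome[exposure == 0].mean()

# "Controlling" for a noisy proxy of U via linear regression: the proxy
# carries only half of U's variance, so residual confounding remains.
proxy = u + rng.normal(size=N)
X = np.column_stack([np.ones(N), exposure, proxy])
beta, *_ = np.linalg.lstsq(X, outcome, rcond=None)
adjusted = beta[1]

print(f"naive 'effect' of a null exposure:  {naive:.2f}")
print(f"after adjusting for a noisy proxy: {adjusted:.2f}")
```

That second number is the "no chance they have rich enough data" problem in miniature: you adjusted for the confounder as measured, the estimate moved, and it's still nowhere near the true value of zero.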
Calling inference "the weakest link" of statistics seems sorta bold, because all the core foundational work is framed in terms of inference, the most central debate is frequentists and Bayesians arguing about inferential paradigms, etc. But I guess what you're saying is that on a practical level it's the part we're all worst at? I'd say that's kind of normal and expected: the majority of people use statistical methods in the course of other work, and our own goals, biases, and pressure to get a result are constantly getting in the way.