Discussion about this post

Ludwig Schmidt:

> "That’s a fascinating feedback loop."

Could an optimistic perspective be that the overall algorithm the field follows is model predictive control? Specifically, the local models, i.e., what researchers study in papers, are simplified to the point that they can't support globally optimal decisions. But the models may still be good enough for making local improvements that incrementally lead to better outcomes overall. Of course, even under that optimistic interpretation, not every paper makes sense.
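The analogy can be made concrete with a toy receding-horizon loop (a hypothetical sketch, not from the post: the dynamics, the drag term, and the `mpc_step` planner are all invented for illustration). Each "paper" fits a crude local model, picks the action that looks best under it, applies the action, and re-measures, and the loop still lands near the target despite the model being wrong:

```python
# Toy model-predictive-control loop: plan with a wrong local model, act,
# observe the true system, repeat. All quantities here are hypothetical.

def true_step(x, u):
    # Real world: dynamics the planner never models exactly (a drag term).
    return x + u - 0.1 * x

def local_model(x, u):
    # Simplified planning model: ignores the drag entirely.
    return x + u

def mpc_step(x, target, candidates=(-1.0, -0.5, 0.0, 0.5, 1.0)):
    # Pick the action whose *predicted* outcome is closest to the target.
    return min(candidates, key=lambda u: abs(local_model(x, u) - target))

x, target = 0.0, 5.0
for _ in range(50):
    x = true_step(x, mpc_step(x, target))  # plan locally, act, observe

print(round(x, 2))  # hovers near the target despite the biased model
```

The point of the sketch is the same as the comment's: none of the local decisions are globally optimal, yet the closed loop of act-then-re-measure keeps the system in the neighborhood of a good outcome.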

Manjari Narayan:

TL;DR: I am a huge fan of ML and sequential design under performative settings. We need new instances of it more than ever in drug development and neurotech. Performativity is most certainly not taken into account in evaluating any brain stimulation or neuromodulation trial.

> Analyzing a single step in a simple RCT reveals a surprising well of complexity and many headaches for the policymaker. It’s much easier to build up a framework for approving interventions than to imagine what will happen if those interventions are applied at a population scale.

This might be very much on the implementation side, but well before Pearl developed transportability, people had started to think about other kinds of non-ideal experiments with greater external validity than RCTs. The high-internal-validity RCT exists to establish whether a benefit exists at all, while pragmatic trials measure real-world performance.

https://rethinkingclinicaltrials.org/chapters/design/experimental-designs-and-randomization-schemes/experimental-designs-introduction/

We could have a philosophical discussion of why one might ever care about internal validity at all. Do ideal, well-controlled experiments matter when what one really cares about is real-world performance? Nancy Cartwright's account of causality questions the modularity of the real world: some things just go together in nature, and it isn't clear what the causal effect of intervening on them separately would even mean. For therapeutic development, it makes sense to triangulate evidence across different research designs, each of which trades off internal and external validity to cover some threat to the validity of the final scientific question.

> Fatalism assumes the absence of temporal dynamics. The meaning of treatments can’t change over time. It means your policy has no effect beyond the treatment of each unit in isolation. People will behave the same before you make a policy and after you make a policy. Most people who work on causal inference know none of this is true, of course. And any seasoned machine learning engineer knows this as well when maintaining systems to continually retrain their stable of prediction models.

In vaccine research, obesity, and social network interventions, interference between units (a violation of the no-interference assumption) is very much a concern. Still a generally underrated topic: https://arxiv.org/pdf/1403.1239
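A toy simulation shows why interference matters for the policymaker (hypothetical numbers, not from the cited paper; the global-coverage spillover model and the `outcome` function are invented for illustration). When part of a unit's outcome comes from peers' treatment, the naive treated-vs-control contrast in a 50/50 RCT misses the indirect effect of treating the whole population:

```python
# Toy interference simulation: outcomes depend on own treatment AND on the
# fraction of treated peers (a crude herd-effect model, purely illustrative).
import random

random.seed(0)
n = 10_000
direct, spill = 2.0, 1.0  # direct effect; spillover from peer coverage

def outcome(own, frac_treated):
    # Outcome rises with own treatment and with population coverage.
    return direct * own + spill * frac_treated + random.gauss(0, 1)

# 50/50 randomized trial: both arms experience the same coverage (~0.5),
# so the spillover term cancels out of the arm contrast.
z = [random.random() < 0.5 for _ in range(n)]
cov = sum(z) / n
y = [outcome(zi, cov) for zi in z]

naive = (sum(yi for yi, zi in zip(y, z) if zi) / sum(z)
         - sum(yi for yi, zi in zip(y, z) if not zi) / (n - sum(z)))

# Policy-relevant contrast: treat everyone vs treat no one (noise-free means).
total = (direct * 1 + spill * 1) - (direct * 0 + spill * 0)

print(f"naive RCT estimate ≈ {naive:.2f}, all-vs-none effect = {total:.1f}")
```

The trial recovers only the direct effect (about 2.0 here), while the population-scale effect of the policy is 3.0: exactly the gap between approving an intervention and predicting what happens when it is applied at scale.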
