Synthesizing a holistic view of organisms from component parts has remained a century-long challenge for systems biology. The body is composed of organs, which are composed of tissues, composed of cells, composed of proteins, and we can often build models of each of these components in isolation. Simple homeostatic loops with relatively simple dynamics are abundant. We can develop plausible mechanisms for them and small models that capture their behavior. But plugging them together into a coherent, predictive model has remained insurmountable.
When I was a postdoc, John Doyle introduced me to Arthur Guyton’s model of the circulatory system. Guyton pieced together what was known about the dynamics of circulation, including fluid dynamics, neural signaling, and hormonal expression, to yield a monster model of the entire circulatory system.
Guyton’s model led to rethinking many aspects of circulation, particularly the influence of salt on blood pressure. But were these inferences legitimate? Over the decades, as computers became more accessible, biologists implemented and tuned Guyton’s model to see if it could make quantitative predictions about bodily responses to stressors. The HumMod system, built on the foundations of Guyton’s modeling, claims to provide “The most complete, mathematical model of human physiology ever created.”
Now that computers are sophisticated enough to simulate these monster models, the models make testable predictions. And there’s now sufficient evidence that those predictions might not be useful. Kurtz et al. compared the predictions from HumMod to actual data from humans changing their salt intake and found that HumMod was just wrong. Not only were the quantitative predictions outside the ranges observed in people, but the qualitative shapes of the dynamics went the wrong way. In Figure 1, for example, the model predicts an eventual decrease in sodium retention on a high-salt diet, but no such decrease is observed in human subjects.
The Guyton model is wrong and barely usable. Sorry, George Box. It might be unfixable. But why is it wrong? Certainly, part of the issue is that interconnected systems often don’t behave the way their components predict. When discussing perfect adaptation, I derived properties of homeostatic loops without knowing their precise parameters. However, you might need to know those parameters once you start connecting loops together. Two homeostatic loops in feedback, where one regulated signal negatively interferes with the other, can produce a variety of surprising behaviors.
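To make that concrete, here is a minimal sketch (a toy of my own, not anything from Guyton’s model): two identical integral-feedback loops, each stable in isolation, that go unstable once each regulated signal suppresses the other strongly enough. The gains `k` and `c` are hypothetical, chosen only to expose the effect.

```python
import numpy as np

k = 1.0   # integral gain of each loop (hypothetical)
c = 1.2   # strength of the negative cross-interference (hypothetical)

# One homeostatic loop near its setpoint: x' = -x + u, u' = -k*x (integral action).
A_single = np.array([[-1.0, 1.0],
                     [-k,   0.0]])

# Two such loops where each regulated signal suppresses the other:
# x1' = -x1 + u1 - c*x2, and symmetrically for x2.
A_coupled = np.array([[-1.0, 1.0, -c,   0.0],
                      [-k,   0.0,  0.0, 0.0],
                      [-c,   0.0, -1.0, 1.0],
                      [0.0,  0.0, -k,   0.0]])

for name, A in [("isolated loop", A_single), ("coupled loops", A_coupled)]:
    growth = np.linalg.eigvals(A).real.max()
    print(f"{name}: max Re(eigenvalue) = {growth:+.2f}",
          "(stable)" if growth < 0 else "(UNSTABLE)")
```

The isolated loop has eigenvalues with real part -0.5. The coupled pair has a mode with real part +0.1, a growing oscillation, even though nothing about either loop changed.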
Control theorists belabor the point that you can have two systems that look nearly identical when isolated but have completely different behavior when connected as part of a larger system. I’ve discussed this conundrum a bit on the blog. Feedback loops yield unintuitive behavior: small delays in open loop lead to cascading errors in closed loop. Huge gains in open loop get squashed in closed loop. What happens when you cascade lots of feedback systems together? Even if you understand the qualitative behavior of the individual components, tiny quantitative differences might lead to incorrect predictions at scale.
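Here’s a toy version of the delay point (again my own sketch; the gain is hypothetical and deliberately aggressive). In open loop, the two plants below differ only by one step of timing, a difference you’d barely notice in a step response. Close the loop, and one converges while the other blows up.

```python
import numpy as np

k = 1.5   # feedback gain (hypothetical, deliberately aggressive)
T = 40    # number of steps to simulate

def simulate(delay_steps):
    # Plant: a pure accumulator, y[t+1] = y[t] + (control applied at time t).
    y = np.zeros(T)
    y[0] = 1.0                        # start off the setpoint of 0
    u = np.zeros(T)
    for t in range(T - 1):
        u[t] = -k * y[t]              # proportional feedback toward 0
        applied = u[t - delay_steps] if t >= delay_steps else 0.0
        y[t + 1] = y[t] + applied
    return y

for d in (0, 1):
    print(f"delay of {d} step(s): |y| after {T} steps = {abs(simulate(d)[-1]):.1e}")
```

Without the delay, the error halves at every step. With a single step of delay, the closed loop’s characteristic roots move outside the unit circle and the error grows without bound.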
Perhaps the issue is that systems biology approaches whole-body modeling the wrong way. The popular dismissive analogy is that integrative physiology is like trying to understand a computer by smashing it into bits and staring at the components you find inside. Twenty years ago, this meant that biological tools couldn’t fix radios. A decade ago, it meant that neuroscience tools couldn’t understand a microprocessor. But if we wanted to be fair, we would ask whether an electrical engineer could cure hypertension. Maybe I should write that paper, because the answer is no.
Engineers have many of the same problems as the biologists. We can be very good at understanding how simple single-input, single-output feedback systems work. I can prattle on all day about integrator windup and oscillation in a two-state differential equation model. But what happens when you take a bunch of these systems and couple them together for some larger-scale purpose? At the full-system level, our theories are far more muddled than we let on.
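Since I brought up integrator windup, here’s the two-state toy in code (a sketch with hypothetical gains, setpoint, and saturation limit): a first-order plant under PI control whose saturating actuator lets the integrator overcharge during the transient, producing a big overshoot that a simple anti-windup clamp avoids.

```python
import numpy as np

dt, steps = 0.01, 3000     # simulate 30 time units
kp, ki = 2.0, 5.0          # PI gains (hypothetical)
u_max = 1.0                # actuator saturation limit (hypothetical)
r = 0.8                    # setpoint, reachable since r < u_max

def peak_output(anti_windup):
    x, z = 0.0, 0.0        # plant state and integrator state
    peak = 0.0
    for _ in range(steps):
        e = r - x
        u = kp * e + ki * z
        u_sat = float(np.clip(u, -u_max, u_max))
        x += dt * (-x + u_sat)    # first-order plant: x' = -x + u
        if not (anti_windup and u != u_sat):
            z += dt * e           # clamp: freeze integrator while saturated
        peak = max(peak, x)
    return peak

print(f"peak output, naive PI:       {peak_output(False):.2f}  (setpoint {r})")
print(f"peak output, anti-windup PI: {peak_output(True):.2f}")
```

The naive loop saturates early, the integrator keeps charging against an actuator that can’t respond, and the output sails past the setpoint. And this is the easy, well-understood case: one loop, two states.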
The promise of systems engineering is that if you have proper abstraction boundaries and can guarantee things about inputs and outputs, you can assemble components into systems that always work and never crash. Engineers tend to point to microprocessors and airplanes as the success stories. We like to tell ourselves that our engineered artifacts are built this way, but when you start to dig into the details, it becomes harder and harder to figure out how true that is. Is modern engineering closer to principled systems verification theory or naive, iterative biological evolution? Can we simulate and provably certify large-scale systems to the precision of reality, or do we just find stuff that works and incrementally patch legacy systems together?
The answer is somewhere in the middle, of course. But I write this to cut the biologists some slack. Our bodies are more complicated than any engineered system we’ve ever built. And yet, you’d be hard-pressed to create a working simulation of a car from component models of internal combustion engines, batteries, brakes, transmissions, axles, and so on. It was not so long ago that “sim2real,” training machine learning models in simulation to control robots in the real world, was presented as the fundamental challenge in robotics.
Control theorists have told me that aerospace companies build full, verified models of their airplanes in MathWorks’ Simulink.1 But we don’t get to fly airplanes just because a simulation pans out. A whole other set of physical tests and design principles is involved in understanding how to put these pieces together.
I’m stuck on what to make of this, so I am again foolishly using the blog to think out loud. Systems-level thinking certainly has a place in biology. Medical therapies based on homeostatic principles, whether for diabetes or muscular atrophy, are undeniably effective. But at what level can we make predictions?
In the new year, I hope to dig deeper into this question about monster modeling in natural and engineered systems. Where is it effective? How well do our abstraction boundaries work, and in which contexts? Full-stack engineering is far more cluttered and weird than we make it out to be, and I hope to flesh out some of its nuance.
If you are a control engineer reading this who has seen one, I’d love a pointer!
Regarding building full, verified models of airplanes in Simulink, two things came up in conversations at CDC in Milan:
- Most designs in the aerospace and automotive industries are incremental and piecemeal, building on top of previous models rather than starting from scratch. Sometimes the patches are beneficial or neutral, but sometimes they lead to catastrophic failure (case in point: MCAS in Boeing's 737 MAX).
- During a panel on control architectures, Alberto Sangiovanni-Vincentelli pointed out that, despite extensively documented requirements and specs, when he and collaborators tried to formally verify several highly complex designs from industry, they found multiple logical inconsistencies.
I wonder if you'd have any interest in the work of, e.g., Robert Rosen in mathematical biology; it's not exactly predictive, but it's an attempt to do high-level mathematical modelling of organisms.