Regarding building full, verified models of airplanes in Simulink, two things that came up in conversations at CDC in Milan:
- Most designs in the aerospace and automotive industries are incremental and piecemeal, building on top of previous models rather than starting from scratch. Sometimes the patches are beneficial or neutral, but sometimes they lead to catastrophic failure (case in point: MCAS on Boeing's 737 MAX).
- During a panel on control architectures, Alberto Sangiovanni-Vincentelli pointed out that, despite extensively documented requirements and specs, when he and collaborators tried to formally verify several highly complex designs from industry, they found multiple logical inconsistencies.
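On the second point, here's a minimal sketch of what such a logical inconsistency can look like, using Z3's Python bindings (the z3-solver package); the two "requirements" are invented for illustration, not drawn from any real design:

```python
# Toy illustration: two individually sensible requirements that are
# jointly unsatisfiable. Requires: pip install z3-solver
from z3 import Bool, Solver, Implies, And, Not, unsat

overspeed = Bool("overspeed")            # aircraft exceeds its speed limit
autothrottle_on = Bool("autothrottle_on")
alarm = Bool("alarm")

s = Solver()
# REQ-1 (invented): on overspeed, disengage the autothrottle and sound an alarm.
s.add(Implies(overspeed, And(Not(autothrottle_on), alarm)))
# REQ-2 (invented): whenever an alarm sounds, keep the autothrottle engaged
# so the crew can focus on diagnosis.
s.add(Implies(alarm, autothrottle_on))
# Ask whether the overspeed case is even satisfiable under both requirements.
s.add(overspeed)

if s.check() == unsat:
    print("Requirements are jointly unsatisfiable in the overspeed case.")
else:
    print("Consistent; example assignment:", s.model())
```

Real requirement sets run to thousands of clauses, which is presumably why contradictions like this hide until someone actually runs a solver over them.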
I wonder if you'd have any interest in the work of, e.g., Robert Rosen in mathematical biology; it's not exactly predictive, but it's an attempt to do high-level mathematical modelling of organisms.
I'm definitely fascinated by the differences between practices in various industries as perceived by the public and the realities "on the ground" / "in the trenches". That said, while systems-level engineering is certainly hard, engineered systems are in fact often successfully designed to be modular (the limitations of human cognition more or less require this), whereas evolved systems are not. The whole reason the paper "Could a neuroscientist understand a microprocessor?" could be written at all was that the microprocessor in question could indeed be simulated in full detail. While modern microprocessors/ICs have more transistors than any human being could look at in a lifetime, these devices mostly work with extremely high reliability even though no single human has pored over all the technical details involved. And critically, if you ever needed to click deeper and deeper into the layers of your VLSI design documents in Verilog, you could do so and understand what each component does down to the most atomistic level without too much effort.

As one issue, microprocessors have far less degeneracy, the tendency for multiple components to perform highly but not entirely overlapping functions, than biological systems do. Similarly, my understanding is that FEA (finite element analysis) simulation tools, while not perfect, are certainly used extensively in industry and have contributed to a noted trend of declining structural safety factors over the last century. This isn't to say that there aren't inconsistencies in systems-level designs.
Evolution, on the other hand, doesn't have clear design purposes. It is 99.9% random in its local iterations, and we do not necessarily have reason to expect that each component has a limited set of purposes that could be captured in a simple natural-language description.
I think you should dive more into chaos theory, into how complex dynamical systems need to be before chaos can arise (not very), and into what the implications are for understanding biological systems that weren't designed by humans, with our limited working memory and our need to "see like a state."
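To underline the "not very" above: the one-dimensional logistic map is about as simple as a dynamical system gets, and at r = 4 it is fully chaotic. A minimal sketch of sensitive dependence on initial conditions (standard textbook material, nothing biology-specific):

```python
# Two logistic-map trajectories starting 1e-10 apart diverge to
# order-1 separation within a few dozen iterations.
r = 4.0
x, y = 0.3, 0.3 + 1e-10

for n in range(60):
    x = r * x * (1 - x)
    y = r * y * (1 - y)
    if n % 10 == 9:  # report every 10 steps
        print(f"step {n + 1:2d}: |x - y| = {abs(x - y):.3e}")
```

One quadratic update rule, one state variable, and long-horizon prediction is already hopeless; biological systems have vastly more state than that.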
The neuroscientist Richard Gregory frequently explained the problems with localizing brain function via lesion studies by analogy to ablation experiments meant to understand a radio (starting in 1958). https://journals.sagepub.com/doi/abs/10.1068/p190561
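Gregory's radio analogy is easy to play out in code. Here's a toy sketch, with entirely invented weights, contrasting a modular system, where knocking out a component cleanly abolishes one function, with a degenerate one, where every knockout partially degrades every function, so single-lesion studies can't localize anything:

```python
# Toy lesion experiment: modular vs. degenerate three-component systems.
# All weights are invented for illustration.

def modular_output(active):
    # Each function depends on exactly one component.
    return {"f1": float(active[0]), "f2": float(active[1]), "f3": float(active[2])}

def degenerate_output(active):
    # Every function draws, with different weights, on every component.
    weights = [[0.6, 0.3, 0.1],
               [0.2, 0.5, 0.3],
               [0.3, 0.2, 0.5]]
    return {f"f{i + 1}": sum(w * a for w, a in zip(row, active))
            for i, row in enumerate(weights)}

for name, system in [("modular", modular_output), ("degenerate", degenerate_output)]:
    print(name)
    for lesioned in range(3):
        # Knock out one component at a time and observe all functions.
        active = [0 if i == lesioned else 1 for i in range(3)]
        print(f"  knock out component {lesioned}: {system(active)}")
```

In the modular case each lesion maps one-to-one to a lost function; in the degenerate case "the component for f2" is simply not a well-posed question.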
"you’d be hard-pressed to create a working simulation of a car from component models of internal combustion engines, batteries, brakes, transmission, axles, and so on."
And yet, you can create an _actual_ car if you have these components -- a crucial difference from the situation in systems biology.
So I wonder if, perhaps, other than the fact that system-level "stuff" is hard, there is another angle here that has to do with modelling vs building. Intuitively (though maybe it's a bit of a stretch), this reminds me of the old debates in robotics and grounded intelligence, and the "the world is its own best model" approach:
"This hypothesis states that to build a system that is intelligent it is necessary to have its representations grounded in the physical world. Our experience with this approach is that once this commitment is made, the need for traditional symbolic representations soon fades entirely. The key observation is that **the world is its own best model**. It is always exactly up to date. It always contains every detail there is to be known. The trick is to sense it appropriately and often enough."
(Brooks 1990, "Elephants Don't Play Chess", emphasis added; other texts in the same spirit include Flynn and Brooks, "Battling Reality", 1989, and Brooks 1991, "Intelligence without Representation")
I very much look forward to your posts on the topic. I am hoping you will
+ take big swings at Digital Twins.
+ address the use of neural generative modeling in things like weather forecasts.
I've also found systems-level thinking in engineering very useful for producing risk analyses of complex systems. For example, at an airport: what would happen if we implemented this type of fueling station at this location rather than over here? How much would it cost to add this type of fueling station? How much would it cost to go from a natural-gas airport to a fully hydrogen airport, and how much could we save in the long run in overall cost, including everything else that cascades into the future? How much would it cost under scenarios X, Y, and Z?
Just proposing one scenario where the usefulness has been quite evident; a toy sketch of this kind of scenario costing follows below. The issue is that it can easily lead to false narratives in certain idealized situations, but I think that's the case for any type of analysis method.
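For concreteness, a minimal sketch of the kind of scenario costing described above; all scenario names and figures are invented, and a real analysis would model the cascading downstream effects explicitly rather than folding them into a single recurring-cost number:

```python
# Toy what-if costing for fueling-station scenarios (all numbers invented).
from dataclasses import dataclass

@dataclass
class FuelingScenario:
    name: str
    capex: float           # up-front construction cost, $
    opex_per_year: float   # recurring cost, $/yr, crudely including downstream effects

def npv_cost(s: FuelingScenario, years: int = 30, rate: float = 0.04) -> float:
    """Total discounted cost of a scenario over the planning horizon."""
    return s.capex + sum(s.opex_per_year / (1 + rate) ** t
                         for t in range(1, years + 1))

scenarios = [
    FuelingScenario("natural gas, location A", capex=12e6, opex_per_year=3.0e6),
    FuelingScenario("natural gas, location B", capex=14e6, opex_per_year=2.7e6),
    FuelingScenario("full hydrogen conversion", capex=55e6, opex_per_year=1.8e6),
]

# Rank scenarios by total discounted cost, cheapest first.
for s in sorted(scenarios, key=npv_cost):
    print(f"{s.name:26s} 30-year NPV cost: ${npv_cost(s) / 1e6:6.1f}M")
```

The interesting (and dangerous) part is exactly what this sketch hides: which cascading costs get folded into that single opex number, which is where the idealized-situation false narratives creep in.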