8 Comments

Two recent-ish counterexamples from aerospace:

1. Reusable rocket landings (open-loop unstable) are now commonplace (a toy sketch follows below).

2. Unstable Lyapunov orbits have been key components of several space missions (starting in 2004), as well as the orbit of our most advanced space telescope (James Webb sits on an unstable halo orbit around the unstable L2 point).
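
For readers outside controls, a minimal sketch of what "open-loop unstable but stabilized by feedback" means, using a toy scalar system with made-up numbers (not any real rocket model):

```python
# Toy scalar system x[t+1] = a*x[t] + u[t] with a > 1: left alone (u = 0)
# the state blows up, but proportional feedback u = -k*x makes the
# closed-loop factor (a - k) less than one, so the state decays instead.
a, k = 1.5, 1.0          # assumed unstable pole and feedback gain (made up)
x_open, x_closed = 1.0, 1.0

for t in range(1, 11):
    x_open = a * x_open              # open loop: grows like 1.5**t
    x_closed = (a - k) * x_closed    # closed loop: shrinks like 0.5**t
    print(f"t={t:2d}  open-loop x={x_open:10.2f}  closed-loop x={x_closed:.4f}")
```

The landing rockets and halo-orbit station-keeping above are the same idea at vastly larger scale: the raw dynamics diverge, and feedback keeps them in check.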


A classic. I always mention this paper about the perils of unmodeled dynamics (with Gunter Stein as the last author) in the first lecture of my adaptive control class: https://dspace.mit.edu/bitstream/handle/1721.1/2820/P-1240-15681714.pdf


Found your blog recently and I’m loving it; this article is probably my favorite! I got into control theory and interfacing with the macro world through launch vehicle control, where things are inherently dangerous. And with the increased interest in aerospace and so-called “hard tech” startups, I think the proportion of young students looking into this stuff will reach old Apollo-era levels!


But modern engineers can use concepts like aleatory and epistemic uncertainty, plus RL, and have actual compute. Surely the Bode integral is just a special case (*satire).


Thanks for the video!


I've been reading all these posts with much enthusiasm; thanks for writing them. Some of us apply these ideas to power systems as well, where the network functions really well (aka meets demand and delivers power) until it simply fails (instability takes over). It is amazing to me how much power engineers manage by using simple linear dynamical models (for control on short timescales). Hope you can connect some of this eventually to modern RL :).


One underrated component of aviation becoming safer is the advancement of digital technology. The transition from analog to digital cockpits greatly improved pilots’ situational awareness of the plane’s condition, reducing the likelihood of error. But making that transition was incredibly difficult, and, as we all know, most engineering disasters come down to human error.

The issue is that the more you automate and close the loop, the more you take out of the hands of the human operator, and with more automation comes a non-zero probability of false alarms. It then becomes a matter of balancing the probabilities of valid and false alarms, and the only fix is to give the pilot some way to corroborate each alarm. Ironically, as pilots complained in the 1980s and 1990s, doing that just gave them even more mental workload!

So the point of my rambling: when you decide to close the loop using a machine, you introduce the likelihood of false alarms, and the operator of a nuclear power plant or an airplane must be able to corroborate them. The irony is that the whole point of closing the loop is to lighten the operator’s cognitive workload, yet sometimes it adds to it. Deciding when you should or should not close the loop, and in which situations, is a design decision. That is when you have the unenviable task of interviewing all the pilots and synthesizing the transcripts to identify the factors for a conceptual model of when you should or shouldn’t close the loop with a machine.
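
A toy way to see the valid-versus-false-alarm balance described above (a minimal sketch with made-up noise and fault levels, not any real avionics alarm logic): raising the alarm threshold cuts false alarms but also cuts the chance that a genuine fault gets flagged.

```python
from scipy.stats import norm

# Hypothetical alarm-threshold sketch. The monitored signal is assumed
# Gaussian: mean 0 in normal operation, shifted by `fault_shift` when a
# real fault is present. An alarm fires when the signal exceeds `threshold`.
noise_sigma = 1.0   # assumed sensor noise level (made up)
fault_shift = 3.0   # assumed mean shift under a genuine fault (made up)

for threshold in (1.0, 2.0, 3.0):
    p_false_alarm = norm.sf(threshold, loc=0.0, scale=noise_sigma)
    p_detection = norm.sf(threshold, loc=fault_shift, scale=noise_sigma)
    print(f"threshold={threshold:.1f}  "
          f"P(false alarm)={p_false_alarm:.3f}  "
          f"P(detect fault)={p_detection:.3f}")
```

Every threshold trades one probability against the other, which is exactly the balance the operator ends up having to adjudicate.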
