At the 1989 Conference on Decision and Control, Gunter Stein delivered a gripping plenary on the great responsibility of control engineering. After 50 years of Cold War success, control theory had matured into a rich mathematical discipline that was automating the technology of science fiction. Control engineering was developing breakthroughs in space exploration, aviation, and energy. But Stein argued that recent disasters should give the community pause. Understanding the dangers of uncertainty was critical to pushing forward the field and the progress it brought along.
I highly recommend watching this talk. It is entertaining both for the vintage academic kitsch (VHS tape! overhead projectors! that suit!) and Stein's dry Midwestern delivery. You can also read the paper here.
Stein had three main points, and they tie into the control spiel I’ve been belaboring this week.
- Unstable systems are provably more difficult to control than stable ones.
- Controllers for unstable systems are operationally critical.
- Closed-loop systems with unstable components are only locally stable.
The first bullet derives from the Bode integral I described earlier this week. An unstable system is one that will unboundedly amplify small inputs. Bode's integral for unstable systems is not equal to zero, but instead equal to the sum of the amplification rates of the unstable modes. Read Stein's paper for the full details. This means that control systems with unstable subsystems are necessarily more sensitive to disturbances than those assembled from purely stable components. And the more unstable the open-loop system is, the more sensitive the closed-loop system is to unmodeled and uncertain dynamics. Working with unstable processes is provably harder than working with stable ones.
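To make the accounting concrete, here is a quick numerical sanity check on a toy loop of my own (not an example from Stein's paper). Under the standard assumptions (the open-loop transfer function rolls off fast enough at high frequency and the closed loop is stable), the integral of ln|S(jω)| over all frequencies equals π times the sum of the real parts of the unstable open-loop poles, where S = 1/(1 + L) is the sensitivity function. That right-hand side is exactly the "sum of amplification rates" above.

```python
import numpy as np
from scipy.integrate import quad

# Toy example (mine, not Stein's): L(s) = K / ((s - 1)(s + 5)) has one unstable
# open-loop pole at s = +1. With K = 10, the closed-loop poles sit at -2 +/- j,
# so the loop is stable and the Bode integral formula applies.
p_unstable = 1.0
K = 10.0

def log_abs_sensitivity(w):
    s = 1j * w
    L = K / ((s - p_unstable) * (s + 5.0))
    return np.log(abs(1.0 / (1.0 + L)))        # ln|S(jw)| with S = 1/(1 + L)

lhs, _ = quad(log_abs_sensitivity, 0.0, np.inf, limit=500)
print(f"integral of ln|S| over frequency: {lhs:.4f}")   # ~ 3.1416
print(f"pi * Re(unstable pole):           {np.pi * p_unstable:.4f}")
```

Push the unstable pole further into the right half plane and the right-hand side grows with it: a more unstable plant forces more disturbance amplification somewhere in frequency, no matter how cleverly you design the controller.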
Stein's second point is that the controllers for unstable or dangerous systems are operationally critical and must be engineered with the utmost care and precision. By the 1980s, these control systems were computerized, and hence code was also operationally critical. Your software better not check for updates during some complex control maneuver. Stein discusses the hardware and software redundancies put into aircraft to ensure safe operation. He describes how the original Airbus A320 had four redundant control circuits, built with two different brands of processors. He also reminds us that the humans in the loop are part of the control system. If something goes wrong, and people lack the proper training to recover from errors, disasters can ensue.
The third observation is a reflection of our modeling. A significant limitation of frequency-domain analysis is that it assumes control inputs can be arbitrarily large and fast. But any control system we build has limited power (i.e., control authority) and must always operate within those limits. What Stein means by locally stable is that controllers with bounded power can't always steer an unstable system back to a stable regime. If the process is in a configuration that demands more authority than your controller can provide, the closed-loop system will grow unboundedly and crash.
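Here is a tiny simulation (my own toy model, not one from the talk) of what "locally stable" means in practice. Take the scalar unstable plant x' = a·x + u with the actuator saturated at |u| ≤ u_max. However aggressive the feedback gain, the closed loop can only recover from initial conditions with |x| < u_max / a; past that boundary, the plant's growth outruns the available authority and the state runs away.

```python
import numpy as np

# Unstable scalar plant x' = a*x + u, high-gain feedback u = -k*x,
# actuator saturated at |u| <= u_max. All numbers are made up for illustration.
a, k, u_max, dt = 1.0, 5.0, 1.0, 1e-3

def simulate(x0, steps=10_000):
    """Forward-Euler rollout over 10 seconds; returns the final state."""
    x = x0
    for _ in range(steps):
        u = np.clip(-k * x, -u_max, u_max)   # bounded control authority
        x += dt * (a * x + u)
    return x

print(simulate(0.9))   # |x0| < u_max / a = 1.0  -> decays back toward zero
print(simulate(1.1))   # |x0| > u_max / a = 1.0  -> diverges despite full authority
```

Shrink u_max or increase a and the recoverable set shrinks with them, which is the whole point: with a bounded actuator, stability of an unstable plant is only ever a local guarantee.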
When engineers disregard Stein's rules, disaster can and does happen. Stein highlights this with two examples. First, he describes failures in supersonic airplane technology. Aerospace engineers thought they could make more agile, faster, lighter planes by computerizing the control systems. These flight systems were open-loop unstable, but with clever control design, engineers figured humans could handle them. To show that this wasn't the case, Stein uses the Grumman X-29, which he worked on at Honeywell. Through a careful analysis of the Bode integral, he shows why the lofty performance goals were impossible. Ignoring these open-loop instabilities led to embarrassing crashes of the JAS 39 Gripen. Given his analysis, Stein predicted the sensors and actuators in new high-performance aircraft would be prohibitively expensive.
The second example is perhaps more riveting. Stein describes how the Chernobyl disaster of 1986 resulted from not respecting the unstable.
“Whether we choose to recognize it or not, control played a major role in that accident. The plant’s hardware did not fail. No valve hung up, no electronic box went dead, and no metallurgical flaw caused a critical part to break. Instead, the reactor control system systematically drove the plant into an operating condition from which there was no safe way to recover. This is true, at least, if we count the control system’s hardware, its human operators, and its operating policies as part of the system.”
If you watched the Chernobyl series on HBO, you'll remember the talk about the “positive void coefficient” in the reactor design. This coefficient refers to how the reactor's power changes with the amount of steam versus liquid water in its plumbing. The closed-loop design was stable, and the void coefficient didn't play a role in normal operating procedures. However, the positive coefficient meant the system was unstable in open loop. Stein demonstrates, in one of the earliest explanations to an international audience, how this design led to unfathomable disaster.
Stein concludes his lecture with a call to action for control engineers.
“This reactor control application, as well as the airplane applications I talked about earlier, illustrate that society does indeed permit control engineers to operate dangerous systems. The number of such applications increases steadily. Not all of them have such dramatic consequences as Chernobyl, but they are dangerous nevertheless.”
What came of this call to action? Thirty-five years after Stein's address, are control theorists in charge of more dangerous applications? Undeniably, control systems are everywhere. One hundred years of innovations in feedback have created a hidden infrastructure that connects our world. In 1989, these systems seemed destined to connect us to the rest of the universe. Stein warned that if we wanted to push our engineering further, it would demand great care and investment, and it would carry real danger.
But a different interpretation might be, “It’s not worth it.” Control theory has its limits. Though it delivered more than we could have imagined, what if it delivered all it could? What would the world look like if we decided not to pursue dangerous control applications but instead chose to work with the ones we had and make them as safe as possible?
It would probably look like the world we live in. We have the same number of nuclear plants running today as in 1989. We are not sending more people into space. Commercial aviation is, on average, slower today than in 1989. We aren’t building supersonic commercial jets.
But aviation is much safer today. Deaths on American commercial aircraft are exceptionally rare. Flight prices are low, and air travel is more accessible than ever (though you'll have to pay extra to avoid having to take off your shoes and belt). Control and feedback still underlie all aspects of the infrastructure of our hyperconnected world, but their aims are decidedly modest.
With the end of the Cold War, society steered in a sharply new direction in the 1990s. Intrepid engineers realized they could make a lot more money from the micro world than the macro world. The internet age roared in. And the smartest engineers decided it was safer and more profitable to convince people to click on ads than to design control systems for nuclear power plants or next-generation air travel. Buggy code and annoying delays are acceptable when serving advertising interlaced with hot takes.
Two recent-ish counterexamples from aerospace:
1. Reusable rocket landings (open-loop unstable) are now commonplace.
2. The use of unstable Lyapunov orbits as key components of several space missions (starting in 2004), as well as the orbit of our most advanced space telescope (the James Webb Space Telescope is on an unstable halo orbit around the unstable L2 point).
A classic. I always mention this paper about the perils of unmodeled dynamics (with Gunter Stein the last author) in the first lecture of my adaptive control class: https://dspace.mit.edu/bitstream/handle/1721.1/2820/P-1240-15681714.pdf