Matters of Life and Death
Motivating stability analysis with homeostasis
When I was first trying to wrap my head around the subject, Sasha Megretski, in his ineffably Russian way, told me control theory was the study of death. Everything in control theory is about ensuring things “retvrn” to the origin. Most controls textbooks open this way, drowning you in a dense formalism of matrices, rational functions, and integral operators that all more or less certify that a dynamical system will converge to zero. Control theorists call this “stability,” and it’s hard to argue that it’s not the core of the field.
But the hyperfocus on stability undersells what the mathematics initially set out to capture. “Zero” suggests an equilibrium where things will converge without effort. But the origin is almost never a “dead state” in control applications. Instead, zero refers to a steady state far from equilibrium that requires vast resources to maintain. Control theory is the study of homeostasis, not heat death.
Homeostasis is the term physiologists and physicians use to describe the various biological processes inside an organism that work overtime to maintain constancy of physiological quantities. For example, our bodies maintain tightly regulated concentrations of oxygen, electrolytes, and glucose in our bloodstream. To maintain this constancy, many other systems are in constant action. Heart rate varies, hormones modulate, muscle fibers tear, neurons signal. Vast amounts of energy are consumed and transformed to keep everything working no matter what threatening conditions we might encounter.
Many of the core ideas of control theory are themselves inspired by the human body’s incredible system of homeostatic regulation. The interplay between control and physiology in the twentieth century deserves its own blog post (or five-volume book). Control theory has always been a biomimetic study of life, not death.
In that spirit, let me motivate stability with homeostasis. Let’s assume we have a system working to maintain a setpoint. The system is designed to keep some signal as close to constant as possible. I’ll call this hopefully constant signal the reguland. The system experiences exogenous disturbances that might change its course and disrupt the constancy of the reguland. The system has a state, a collection of variables that at each fixed time predicts the system’s future. The state and disturbance at a particular time determine the next state according to some rule:
next_state = F(state, disturbance)

The reguland can be measured, and is a manifestation of the current system state. That is, we can write the value of the reguland as a function of the current state:
reguland = G(state)

For any constant value of the disturbance, we’d like conditions that guarantee the system settles to a state where the reguland equals a specified setpoint level. No matter what the disturbance, the system has to converge to the same value of the reguland, but this might require a different state value for every value of the disturbance.
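To make this concrete, here is a minimal sketch in Python. The particular F, G, and constants are invented for illustration; the point is only that, when the disturbance is held constant, the state settles to a fixed point and the reguland settles along with it.

def F(state, disturbance):
    # hypothetical scalar rule: the state relaxes toward a disturbance-dependent point
    return 0.8 * state + disturbance

def G(state):
    # the reguland is a readout of the state
    return 2.0 * state

state = 0.0
disturbance = 1.0  # held constant
for _ in range(50):
    state = F(state, disturbance)

print(G(state))  # settles near 2.0 * 1.0 / (1 - 0.8) = 10.0

Note that this toy system settles wherever the disturbance pushes it. Genuine setpoint regulation, where the reguland returns to a prescribed value no matter the disturbance, requires the extra structure discussed below.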
The goal of control analysis is to find conditions on the maps F and G that guarantee such a steady state is possible and robust to further disturbances. One of the most basic analyses uses calculus. If we assume that F and G are differentiable, then the implicit function theorem guarantees there is a value of the state that maintains the setpoint, provided an appropriate Jacobian is invertible. This state value is determined by the value of the disturbance and can be computed from the derivatives of F and G.
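In symbols, this is a sketch of the computation, in the same notation as above, with DF_state and DF_disturbance denoting the Jacobians of F with respect to the state and the disturbance. The steady state must satisfy

steady_state = F(steady_state, disturbance)
set_point = G(steady_state)

If I - DF_state is invertible at this steady state, the implicit function theorem gives the steady state as a differentiable function of the disturbance, with sensitivity

d(steady_state)/d(disturbance) = inverse(I - DF_state) * DF_disturbance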
These derivatives also tell us something about the system dynamics near the setpoint. If we start at a fixed point associated with a “normal” environmental disturbance, and nature slightly changes, we can approximate the convergence to the new fixed point using linearization. Linearization assumes the dynamics are well approximated by the linear model defined by the Taylor series approximations of F and G at the fixed point. From the linearization, we can derive properties of the derivative of F needed to guarantee that the system shifts to a new setpoint (e.g., the eigenvalues of the Jacobian matrix all need to have magnitude less than one). The idea of using static local information to inform temporal behavior is called Lyapunov’s indirect method, or Lyapunov’s first method. We transform the problem of general nonlinear control into one of local linear algebra. The linear algebra tells us interesting and surprising things that are generally actionable in engineering design. We just have to be careful to know the limits of these analyses.
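Here is a sketch of how that eigenvalue check plays out numerically. The two-dimensional F below and all of its constants are made up for illustration: we find a fixed point by iteration, estimate the Jacobian of F there by finite differences, and verify that every eigenvalue has magnitude less than one.

import numpy as np

def F(x, d):
    # hypothetical two-dimensional dynamics, invented for illustration
    return np.array([0.5 * x[0] + 0.1 * np.sin(x[1]) + d,
                     0.2 * x[0] + 0.7 * x[1]])

def fixed_point(d, iters=1000):
    # find the steady state for a constant disturbance by iterating the dynamics
    x = np.zeros(2)
    for _ in range(iters):
        x = F(x, d)
    return x

def jacobian(f, x, d, eps=1e-6):
    # central finite-difference estimate of dF/dx at the point x
    n = len(x)
    J = np.zeros((n, n))
    for i in range(n):
        dx = np.zeros(n)
        dx[i] = eps
        J[:, i] = (f(x + dx, d) - f(x - dx, d)) / (2 * eps)
    return J

d = 0.3
xstar = fixed_point(d)
J = jacobian(F, xstar, d)
print(np.abs(np.linalg.eigvals(J)))  # all magnitudes below 1 => locally stable

For this toy map the magnitudes come out around 0.77 and 0.43, so the linearization predicts the state relaxes back to its fixed point after a small change in the disturbance.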
One such interesting conclusion is that gradient descent is effectively necessary to maintain setpoints. Following the linear algebra, we can always rewrite the state in such a way that one of the components is a running sum of the deviations of the reguland from its setpoint. That is, there is always a component of the state whose next value equals its last value minus the current deviation:
x[new] = x[old] - set_point_deviation

Control theorists call this integral control, and we’ll talk more about it next week. Integral control is an essential tool for maintaining setpoints in control design. It turns out that it is in fact a necessary part of any setpoint regulation.[1]
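Here is a toy sketch of that recursion in action; the first-order plant and its gains are invented for illustration. An integrator accumulates the setpoint deviation, and the reguland is driven back to the setpoint even though the controller never learns the disturbance.

set_point = 1.0
disturbance = 0.5   # constant, but unknown to the controller
x_integral = 0.0    # the integrator component of the state
reguland = 0.0

for _ in range(200):
    deviation = reguland - set_point
    x_integral = x_integral - deviation  # exactly the update above
    # toy first-order plant driven by the integrator and the disturbance
    reguland = 0.9 * reguland + 0.1 * (x_integral + disturbance)

print(reguland)  # approaches set_point despite the disturbance

The integrator settles at whatever value cancels the disturbance, which is exactly why a different steady state is needed for every disturbance even though the reguland lands in the same place.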
While Lyapunov’s first method provides useful insights into the local behavior of complex nonlinear dynamics, using these local linearizations in practice relies heavily on the precise specification of the model. Incorporating model uncertainty in these analyses is not straightforward.[2] Luckily for us, Lyapunov came up with a second method, a direct method, that can help us analyze the behavior of less well-specified systems. Lyapunov’s second method will be the subject of tomorrow’s post.
[1] I’ll work out these mathematical details in class, and I’ll post a pdf with this derivation later today. I tried to write this out in substack equations, and it was just a bit more unwieldy than I wanted. One of my goals here is getting these arguments shorter and simpler, but this is still a work in progress.
[2] At least not to me! YMMV.

