I've been making a big deal about how integral control is sufficient for homeostasis. Let me be a bit more precise about what I mean here and what I think are the interesting lessons from such an analysis.
Let's say we have some black box system that regulates some signal y[t]. The system can be perturbed by a disturbance signal d[t]. For today, I'll assume these signals are one dimensional, meaning that at every time t, y[t] is a single real number, and the same goes for d[t]. But if you are comfortable with linear algebra, the generalization to multidimensional signals follows from the same argument I'll present.
The system is perfectly adapting if for any setting of the disturbance signal to a constant, y[t] converges to the same constant reference level R. That is, if we set d[t]=D where D is any number, y[t] converges to the same R. You should think of the constant disturbance values as defining a particular environment. For example, D could measure outdoor temperature and R body temperature. A system is perfectly adapting if it maintains a constant body temperature no matter how hot or cold it is outside. Perfect adaptation captures an essential aspect of systems adapting to an unknown future.
Last time, we saw that if a system always converges to a steady state and has an integral controller in its architecture, then the system is perfectly adapting. Today, I want to show that the integrator is also necessary for linear systems: any perfectly adapting linear system has an internal integrator of the regulated signal.
Let’s again start with a linear time-invariant dynamic system. In this case, all of the internal variables can be collected into a state vector x[t], and these vectors change in time according to the update rules
x[t+1] = A*x[t] + b*d[t]
y[t] = c*x[t]
where A is a constant matrix and b and c are constant vectors.
Suppose we know that this system is perfectly adapting. For any constant input d[t] = D, y[t] converges to a reference level R that is independent of D, and x[t] converges to some steady state x(D) that may depend on D. The internal states can converge to different places as long as the output converges to the reference.
The first thing to note is that the reference level in a linear system, R, must be zero. By the nature of the equations, x(D) is a linear function of D with x(0) = 0, so c*x(D) is a linear function of D as well. The only linear function of D that equals the same constant R for every D is the zero function, and hence R = 0.
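To make this concrete, here is a minimal numerical sketch in Python. The system below is my own toy example, not anything from the text: a first-order plant in feedback with an integral controller, stacked into the x[t+1] = A*x[t] + b*d[t], y[t] = c*x[t] form. Simulating it with different constant disturbance levels shows y[t] settling back to R = 0 every time.

```python
import numpy as np

# Toy perfectly adapting system (a made-up example for illustration):
# plant:      p[t+1] = 0.5*p[t] + u[t] + d[t],  with output y[t] = p[t]
# controller: z[t+1] = z[t] - y[t],             with u[t] = 0.2*z[t]
# Stacking the state as x = [p, z] gives x[t+1] = A*x[t] + b*d[t], y[t] = c*x[t].
A = np.array([[0.5, 0.2],
              [-1.0, 1.0]])
b = np.array([1.0, 0.0])
c = np.array([1.0, 0.0])

def final_output(D, T=200):
    """Simulate with constant disturbance d[t] = D and return y[T]."""
    x = np.zeros(2)
    for _ in range(T):
        x = A @ x + b * D
    return c @ x

for D in [-3.0, 0.0, 1.0, 10.0]:
    print(f"D = {D:5.1f}  ->  y[T] = {final_output(D): .2e}")
# The output returns to zero no matter the disturbance level.
```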
The second thing I'll show is that there is an integrator of y hanging out inside the state vector.[1] Specifically, there is some linear combination of the state
s[t] = dot(v, x[t])
such that
s[t+1] = s[t] - y[t]
That is, the signal s[t] is a running sum of y[t], a discrete-time integral of the regulated signal (up to a sign).
I claim that the integrator is given by the vector v with dot(v, x) = c*inv(I - A)*x, so that

s[t] = c*inv(I - A)*x[t]

Here inv(I - A) is the matrix inverse of I - A, which exists because the state converges to a steady state, and convergence rules out an eigenvalue of A at 1.
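Continuing the toy example from above, we can sanity-check this claim numerically: solve (I - A)^T v = c for v, then verify the update rule s[t+1] = s[t] - y[t] along a trajectory driven by an arbitrary, non-constant disturbance.

```python
# Recover the integrating direction v from v^T = c*inv(I - A),
# then confirm s[t] = dot(v, x[t]) satisfies s[t+1] = s[t] - y[t].
I = np.eye(2)
v = np.linalg.solve((I - A).T, c)
print("v =", v)  # here v = [0, 1]: s[t] is exactly the controller's state z[t]

x = np.array([2.0, -1.0])            # arbitrary initial state
for t in range(5):
    d = np.sin(t)                    # arbitrary, non-constant disturbance
    x_next = A @ x + b * d
    assert np.isclose(v @ x_next, v @ x - c @ x)  # s[t+1] == s[t] - y[t]
    x = x_next
print("integrator identity holds along the trajectory")
```

In this particular example, v picks out the controller's integrator state itself, which is a reassuring consistency check.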
We need some linear algebra to prove this, but it's not too bad. I'm going to put the proof below the fold for anyone interested in verifying it.
The point of this derivation is that every perfectly adapting linear system carries an internal state, some linear combination of its variables, that is an integral of the regulated signal. Integral action is necessary and sufficient for perfect adaptation in linear systems. For nonlinear systems, an internal integrator is sufficient for perfect adaptation, and Khammash has a simple argument proving that integral action is necessary in any linearization of a perfectly adapting system.
Why am I belaboring this point? First, the centrality of integral action points to necessary patterns in regulatory systems, and it can help us better understand how to build simple models of complex biology. But there are also repeated patterns that occur when you implement systems with integrators, and these patterns manifest themselves in biological systems. Next time I want to dive into some of these consequences of regulation.
OK, here's a proof that s[t] is integrating y[t], with an excessive amount of detail. First, let's check that s[t+1] = s[t] - y[t]. We can write out the definition and then plug in the update rule for the state:

s[t+1] = c*inv(I - A)*x[t+1] = c*inv(I - A)*A*x[t] + c*inv(I - A)*b*d[t]
Now if we collect all the terms multiplying the x[t] and d[t], using the identity inv(I - A)*A = inv(I - A) - I, we get

s[t+1] = s[t] - c*x[t] + c*inv(I - A)*b*d[t]
That first leftover term is just the negative of y[t]:

s[t+1] = s[t] - y[t] + c*inv(I - A)*b*d[t]
To prove my claim, I need to show the second term is zero for all d[t]. This only happens if

c*inv(I - A)*b = 0
To verify this, note that when d[t] = D, the steady state value of x satisfies the equation

x(D) = A*x(D) + b*D
Rearranging this expression gives us a formula for x(D):

x(D) = inv(I - A)*b*D
As I mentioned already, the steady state is a linear function of the disturbance level. Since y[t] converges to zero, we need c*x(D) = c*inv(I - A)*b*D = 0 for every D, which forces c*inv(I - A)*b = 0. And hence, we have derived our integrator.
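Here is the same chain of identities checked numerically on the toy system from above: the steady state formula, the vanishing output, and the condition c*inv(I - A)*b = 0.

```python
# Steady state under constant disturbance d[t] = D: x(D) = inv(I - A)*b*D.
D = 7.0
x_ss = np.linalg.solve(I - A, b * D)
print("x(D)         =", x_ss)                            # a fixed point of the dynamics
print("c * x(D)     =", c @ x_ss)                        # 0: the output adapts perfectly
print("c*inv(I-A)*b =", c @ np.linalg.solve(I - A, b))   # 0, as the proof requires
```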
[1] To my control theory readers, this is a long but elementary way of proving that the closed-loop map has a zero at 1.
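For the same toy system, you can see that zero directly: evaluating the disturbance-to-output transfer function H(z) = c*inv(z*I - A)*b near z = 1 shows it vanishing exactly at 1.

```python
# The closed-loop disturbance-to-output map H(z) = c*inv(z*I - A)*b
# has a zero at z = 1, which is what blocks constant disturbances.
for z in [0.9, 1.0, 1.1]:
    H = c @ np.linalg.solve(z * I - A, b)
    print(f"H({z}) = {H: .4f}")
# H(1.0) = 0 while the nearby values are not.
```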
There's an old-school (frequency-domain) derivation of this via system type (number of poles at the origin of the open-loop transfer function assuming unity feedback). You need system type of at least 1 in order to track constant references and reject constant disturbances.
In the footnote, that'd be a _pole_ at 1, right?