There's an old-school (frequency-domain) derivation of this via system type (number of poles at the origin of the open-loop transfer function assuming unity feedback). You need system type of at least 1 in order to track constant references and reject constant disturbances.
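(For completeness, the one-line version of that derivation, via the standard final value theorem argument and assuming the closed loop is stable: with unity feedback and loop gain L(s) = G(s)/s, i.e. type 1 with G(0) finite and nonzero, the error due to a step reference R(s) = 1/s is E(s) = R(s)/(1 + L(s)), so e_ss = lim_{s->0} s E(s) = lim_{s->0} 1/(1 + G(s)/s) = 0. A constant output disturbance is killed by the same limit applied to D(s)/(1 + L(s)).)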
Yes, and it's beautiful and elegant. But I'm striving to develop a theory of control without s- or z-transforms. :)
On the other hand, I might have to talk about impulse responses to finish the argument I'm building here. I will spend the weekend thinking about it.
Autoregressive models like y[t] + a_1 y[t-1] + ... + a_n y[t-n] = b_0 u[t] + ... + b_m u[t-m] should give you a way of introducing impulse responses without too much hassle. You can write it as (1 + a_1 S + ... + a_n S^n) y[t] = (b_0 + b_1 S + ... + b_m S^m) u[t], where S is the shift (or delay) operator (Su)[t] = u[t-1] and work with formal polynomials in S.
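If it's useful, here's a minimal sketch of that (the coefficients are just a toy, stable example of my own choosing): get the impulse response by feeding a unit impulse into the recursion and running it forward.

```python
def impulse_response(a, b, T=10):
    """Impulse response of y[t] + a[1]y[t-1] + ... + a[n]y[t-n]
    = b[0]u[t] + ... + b[m]u[t-m], with a[0] taken to be 1."""
    u = [1.0] + [0.0] * (T - 1)   # unit impulse input
    y = [0.0] * T
    for t in range(T):
        y[t] = sum(b[j] * u[t - j] for j in range(len(b)) if t >= j)
        y[t] -= sum(a[j] * y[t - j] for j in range(1, len(a)) if t >= j)
    return y

# Toy model: (1 - 1.5 S + 0.7 S^2) y = (1 + 0.5 S) u, written as coefficient lists
print(impulse_response(a=[1.0, -1.5, 0.7], b=[1.0, 0.5]))
```

(This is the same computation that scipy.signal.lfilter(b, a, u) does for you.)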
Or: the output is the convolution of the input with the system's impulse response. You know, like in a conv net.
lol
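Joking aside, that claim is easy to check numerically. A sketch with the same toy coefficients as above: run the recursion on an arbitrary input and compare against convolving that input with the (truncated) impulse response.

```python
import numpy as np
from scipy import signal

a, b = [1.0, -1.5, 0.7], [1.0, 0.5]   # same toy AR model as above
T = 50

impulse = np.zeros(T); impulse[0] = 1.0
h = signal.lfilter(b, a, impulse)      # impulse response, truncated to T samples

u = np.random.default_rng(0).standard_normal(T)   # arbitrary input
y_recursion = signal.lfilter(b, a, u)             # run the difference equation on u
y_convolution = np.convolve(h, u)[:T]             # convolve input with impulse response
print(np.allclose(y_recursion, y_convolution))    # True
```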
In the footnote, that'd be a _pole_ at 1, right?
No, it's a zero! C (zI-A)^{-1} B = 0 when z=1.
It's funny because in *open* loop, an integrator has a pole at z=1. But in these homeostatic systems, you detect the internal integrator by looking for a zero at z=1.
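To make that concrete, here is a toy closed loop of my own (not the example from the post): a stable first-order plant with an input disturbance d, wrapped with a discrete-time integral controller. The closed-loop map from d to the output satisfies C (zI-A)^{-1} B = 0 at z = 1.

```python
import numpy as np

# Plant: p[t+1] = 0.5*p[t] + u[t] + d[t], output y[t] = p[t]
# Integral controller: x[t+1] = x[t] + y[t], u[t] = -0.2*x[t]
# Closed-loop state is (p, x); d is the input, y is the output.
A = np.array([[0.5, -0.2],
              [1.0,  1.0]])
B = np.array([[1.0], [0.0]])
C = np.array([[1.0, 0.0]])

print(np.abs(np.linalg.eigvals(A)))              # both inside the unit circle: stable
print(C @ np.linalg.solve(np.eye(2) - A, B))     # C(zI-A)^{-1}B at z=1: exactly 0
```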
Ah, yes, of course! So rusty I fell into the classic mistake of mixing the open-loop and closed-loop transfer functions.
Happens to me all the time!
If you are into transfer functions, you can see how the pole becomes a zero when you close the loop:
An integrator is G(s) = 1/s.
Now connect the integrator in negative feedback to itself with gain K. The closed-loop transfer function from a disturbance at the output to the output is then:
H(s) = s/(s+K)
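A quick scipy sanity check of that (K = 2 is an arbitrary choice of mine): the response of H to a constant (step) disturbance decays to zero, and the gain at zero frequency vanishes.

```python
import numpy as np
from scipy import signal

K = 2.0
H = signal.TransferFunction([1.0, 0.0], [1.0, K])   # H(s) = s/(s + K)

t, y = signal.step(H)        # response to a constant disturbance
print(y[-1])                 # ~0: the constant disturbance is rejected

w, resp = signal.freqresp(H, w=[1e-6, 1.0, 100.0])
print(np.abs(resp))          # gain -> 0 as frequency -> 0
```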
Indeed, with the frequency-domain interpretation that the gain of the closed-loop disturbance-to-output transfer function must go to zero as the frequency does - rejection of the constant disturbance, just as explained. I always liked the continuous-time analysis more than the z-transform discrete-time version anyway!
A decade-long challenge for me has been rethinking how to teach control ideas to computer scientists. I agree that the simple ideas of feedback systems are most elegantly described by continuous-time, frequency-domain theory. But then, what does that have to do with a robot that is running Q-learning in PyTorch?
I had a question about control systems and dynamical systems recently and this seems like a good place to bring it up: can any dynamical system with a stable attractor be recast as a control system? A control system whose “set point” (or perhaps not a single point but a subset of state space) is the attractor? Or are there essential features of a control system other than that the system reliably converges on a subset of state space, even when subject to disturbing inputs?
For me, control systems are simply dynamical systems with inputs that you can manipulate. In my posts so far on homeostatic systems, the manipulable input is the disturbance that adversarially perturbs the system off its setpoint.
Control system ideas can be used metaphorically to understand dynamical systems. You can also abstract internal parts of the system and treat them as inputs. For example, you could think of the complex network that regulates blood calcium concentration, mediated by the parathyroid gland, as if it were an engineered control system. In this case, the body "manipulates" the blood calcium level by adding calcium from the bone and intestine.