# Complex Sensitivity Analysis

### How frequency domain control highlights robustness and uncertainty quantification.

What’s the upside of working with transfer functions? I’ve certainly found it easier to explain simple differential equation models of dynamics, and off-the-shelf software can let you plan complex trajectories with those models. There must be some payoff to working with complex numbers and thinking about everything in terms of frequencies. Was it just the case that people in the 1940s were naive and didn’t have computers?

In his excellent survey *Model Uncertainty and Robust Control*, Karl Astrom argues that the frequency domain is powerful because it puts uncertainty and robustness analysis front and center. I have been harping on how it’s hard to know *which* uncertainties need to be quantified in many decision problems. Frequency domain control gives us some straightforward answers.

To see how, let’s again use the process control model from yesterday.

In this diagram, C is the control system, and P is the process we’re trying to control. The diagram also shows a reference signal we’d like the closed loop system to track, a disturbance signal that impacts the process, and a noise signal that corrupts the measurements.

Using algebra, we computed that the map from the reference to the output is

$$T = \frac{PC}{1+PC}.$$
T is a function over frequencies: for each frequency present in the reference signal, T gives its amplification and phase shift. T is called the *complementary sensitivity function*. Another map we computed yesterday was the map from the noise to the output. This is

$$S = \frac{1}{1+PC}$$

and is called the *sensitivity function*.
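As a concrete sketch, here’s how to evaluate S and T numerically on a frequency grid. The process P(s) = 1/(s + 1) and proportional controller C = 10 are my own illustrative choices, not from the post:

```python
import numpy as np

# Illustrative example (not from the post): a first-order process
# P(s) = 1/(s + 1) in feedback with a proportional controller C(s) = 10.
omega = np.logspace(-2, 2, 500)   # frequency grid, rad/s
s = 1j * omega                    # evaluate transfer functions on the imaginary axis

L = 10.0 / (s + 1.0)              # open loop transfer function PC
T = L / (1.0 + L)                 # complementary sensitivity: reference -> output
S = 1.0 / (1.0 + L)               # sensitivity: noise -> output

# At low frequencies this loop attenuates noise by roughly a factor of 11
# (|S| near 1/11) while tracking the reference imperfectly (|T| near 10/11).
print(abs(S[0]), abs(T[0]))
```

Sweeping over `omega` like this is exactly how one reads a Bode plot of S and T: each entry tells you how the closed loop treats a pure sinusoid at that frequency.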

What sort of sensitivity are these functions measuring? In all of the expressions we’ve been computing, we’ve had a denominator 1+PC. Again, annoyingly, this is a complex number. If it equals zero, the closed loop system is *unstable*, unboundedly amplifying small signals.

The magnitude of 1+PC equals the reciprocal of the magnitude of the sensitivity function S. The larger the sensitivity function, the closer the control design is to instability: the reciprocal of its peak measures the margin of error the closed loop system has before going unstable. We want the sensitivity function to be small at all frequencies.
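This margin interpretation is easy to check numerically. Using the same assumed loop as before (P(s) = 1/(s + 1), C = 10, my illustrative choices), the peak of |S| is exactly the reciprocal of how close PC comes to the critical point −1:

```python
import numpy as np

# Illustrative loop (not from the post): P(s) = 1/(s + 1), C = 10.
omega = np.logspace(-2, 3, 2000)
s = 1j * omega
L = 10.0 / (s + 1.0)              # open loop PC
S = 1.0 / (1.0 + L)

Ms = np.max(np.abs(S))            # peak of the sensitivity function
dist = np.min(np.abs(1.0 + L))    # closest the loop gets to the point -1

# Since |1 + PC| = 1/|S| at every frequency, the peak of |S| is the
# reciprocal of the smallest distance to -1: a big peak means a small margin.
print(Ms, 1.0 / dist)
```

A large peak sensitivity means some small perturbation of the loop could push 1+PC through zero, which is exactly the instability the text warns about.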

We can also directly consider uncertainty in our process model. If we’ve estimated a nominal model of the process P, how much uncertainty can we tolerate and still be sure we won’t end up with an unstable feedback loop? Suppose that the process is really equal to P’. Then, doing a little algebra, we see that as long as

$$\left|\frac{P'(i\omega)-P(i\omega)}{P(i\omega)}\right| < \frac{1}{|T(i\omega)|} \quad \text{for all } \omega,$$

then 1+P’C cannot equal zero. The complementary sensitivity function thus provides a complementary notion of margin: at the frequencies where T is large, small *relative* changes in the process can drive the system near instability.
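Here’s a sketch of this robustness test in code. I keep the assumed loop P(s) = 1/(s + 1), C = 10, and take the “true” process P’ to have some fast unmodeled dynamics, P’(s) = 1/((s + 1)(0.01s + 1)); all of these choices are mine, for illustration:

```python
import numpy as np

# Illustrative robustness check (models are my own assumptions, not the post's).
omega = np.logspace(-2, 3, 2000)
s = 1j * omega

P = 1.0 / (s + 1.0)                         # nominal process model
P_true = 1.0 / ((s + 1.0) * (0.01 * s + 1.0))  # "true" process with unmodeled dynamics
C = 10.0

T = P * C / (1.0 + P * C)                   # complementary sensitivity of the nominal loop
rel_err = np.abs((P_true - P) / P)          # relative model error at each frequency

# If the relative error stays below 1/|T| at every frequency, then
# 1 + P'C cannot pass through zero and the true loop remains stable.
robust = bool(np.all(rel_err < 1.0 / np.abs(T)))
print(robust)
```

The comparison is frequency by frequency: the design only needs an accurate model where |T| is large, and can tolerate enormous model error where |T| is small.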

In an ideal world, we’d like T to be equal to 1 everywhere and S equal to zero everywhere. If this were the case, we’d perfectly track a reference signal and reject all noise in our measurements. If C were equal to infinity, that’s exactly what we’d get. But what about for realistic, non-infinite controllers? Is this still achievable? Since S+T=1, maybe this isn’t entirely out of the question. But S and T are complex numbers, so this equality doesn’t rule out both being huge in magnitude while still canceling to sum to 1.

It turns out that the ideal control design is impossible. In 1945, Hendrik Bode derived a remarkable integral. Suppose the open loop map from control input to process output, PC, is itself stable, meaning the open loop system amplifies all frequencies by some bounded (but potentially very large) amount, and that its gain rolls off quickly at high frequencies. Then

$$\int_0^\infty \log |S(i\omega)|\, d\omega = 0.$$
Sorry, what? First of all, no one has ever explained to me how Bode figured this out. Why would he even think that he could compute this integral in the first place? For those of you who have experience with complex analysis, you’ll know that integrals of complex functions have all sorts of wacky properties. Unlike the gritty expressions we are forced to derive in Calc 2, complex integrals over weird curves end up equaling integer multiples of pi for some reason. But still, the integral of the log of the sensitivity? Why would you think that would be easy to calculate? I’ve looked at proofs of this Bode Integral Formula, and they are anything but intuitive. (And if some control theorist out there is reading this and has an elementary proof of the Bode Integral Formula, please send it to me!)

Mathematical amazement aside, this integral tells us something remarkable about physical reality, too. We want S=0 everywhere, but this integral says this is impossible. If you have a set of frequencies where the sensitivity is less than 1, then there must be another set of frequencies where the sensitivity is more than 1. There is effectively a conservation law of sensitivity. If you design the system to be insensitive at low frequencies, it will be sensitive at high frequencies.
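You can watch this conservation law hold numerically. For an assumed stable open loop with fast high-frequency roll-off, say PC(s) = 10/(s + 1)² (my choice, not from the post), the areas where log|S| is negative and positive cancel almost exactly:

```python
import numpy as np

# Illustrative check of Bode's integral for an assumed stable open loop
# PC(s) = 10/(s + 1)^2, which rolls off fast at high frequencies.
omega = np.linspace(1e-3, 1e4, 1_000_000)
s = 1j * omega
L = 10.0 / (s + 1.0) ** 2
S = 1.0 / (1.0 + L)

logS = np.log(np.abs(S))
# Trapezoid rule approximation of the Bode integral over the grid.
integral = np.sum(0.5 * (logS[1:] + logS[:-1]) * np.diff(omega))

# The frequencies where noise is attenuated (log|S| < 0) are paid for
# by frequencies where it is amplified (log|S| > 0): the areas cancel.
print(integral)
```

The small residual comes from truncating the integral to a finite frequency grid; the attenuation you buy below the crossover frequency is repaid, unit for unit, above it.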

Which high frequencies those are is up to you and your control design team. Maybe you can push the sensitivity into regimes you’ll never see when running your control system in practice. Maybe we can just ignore the Bode Integral as a mathematical curiosity. In the next post, I’ll give a couple of examples, courtesy of Gunter Stein, where ignoring the inherent tradeoffs in the Bode Integral led to catastrophe.
