My 11-year-old son loved the math behind the simple feedback amplifier. But when I tried to explain Laplace transforms to him yesterday, we quickly gave up and made some bad techno instead. Classical control theory moves from fifth-grade math to advanced collegiate math in the first lecture. You need linear algebra. You need differential equations. You need Fourier transforms.

The fact that you need complex numbers already makes it a struggle, and that was the part where I got stuck explaining linear systems to my kid. So I’m going to try to write this blog for him. But it’s also for control theorists. Though “robotics” is all the rage in modern AI research and development, classical control seems far removed from the story. I think it still has valuable lessons, especially for problems in decision making. Yet, as you might have seen over the past couple of posts, I haven’t figured out how to explain these clearly. I’ve been trying for a decade. Consider this post another attempt at explanation.

The main idea I failed to explain to my kid yesterday was frequency and phase. I described ideal amplifiers last week. In that model, the amplifier boosted every signal by the same amount. But I also mentioned that the feedback amplifier would still work if the boost differed for different signals.

A way to think about these different amplitudes is in terms of signal frequency. I like to think about this in terms of audio signals. As you have likely experienced, every amplifier will have different gains for the lows, mids, and highs. Your music sounds different in the car than on your phone speaker. On your phone, the very low and very high frequencies are attenuated, not amplified.

But what does “frequency” really mean here? Most people understand pitch and graphic equalizers, but what’s the mathematics behind it? Individual frequencies are simple signals that repeat. By repeating some pattern, you’ll hear a tone. Signals that repeat very rapidly sound high-pitched (treble). Signals that repeat slowly sound low-pitched (bass). A linear amplifier amplifies each frequency independently.
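To make “repeating rapidly” concrete, here’s a small sketch in Python with NumPy. The 440 Hz tone and 8 kHz sample rate are my illustrative choices, not from the post: a pure tone is a signal that repeats some number of times per second, and the FFT reports exactly how many.

```python
import numpy as np

# A 440 Hz tone sampled at 8 kHz: the pattern repeats 440 times per second.
fs = 8000
t = np.arange(fs) / fs                  # one second of time stamps
tone = np.sin(2 * np.pi * 440 * t)

# The FFT picks out which repetition rates are present in the signal.
# With one second of audio, bin k corresponds to k Hz.
spectrum = np.abs(np.fft.rfft(tone))
peak_hz = np.argmax(spectrum) * fs / len(tone)
print(peak_hz)  # 440.0
```

Raise the 440 and the peak moves up: that’s treble. Lower it and you get bass.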

The one aspect that’s harder to capture is that the amplifier also adds a phase shift to different frequencies. This phase shift slightly delays the output signal relative to the input signal. Mathematically, we can write the amplitude gain and phase shift together as a single complex number:

G(f) = A(f) e^{i p(f)}

A(f) is the amplitude boost, and p(f) is the phase shift. What’s hard to appreciate is that G(f) is the right number to use to describe what an amplifier does. This funny complex gain is sufficient to let you know the gain of the negative feedback amplifier for all frequencies. Namely, it’s the formula we worked out last week:

G(f) / (1 + k G(f))

where k is the gain of the feedback path.

Once we combine amplitude and phase into this complex number, everything we did for the feedback amplifier can be treated like fifth-grade algebra again. OK, maybe high school algebra, since we’re using complex numbers.
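Here’s what that algebra looks like in code, at one fixed frequency. The numbers are invented for illustration, and k is the feedback fraction from the standard negative feedback formula G/(1 + kG):

```python
import numpy as np

# Hypothetical open-loop gain at some frequency f: an amplitude boost
# of 100 with a 30-degree phase lag (numbers made up for illustration).
A = 100.0                       # amplitude boost A(f)
p = -np.pi / 6                  # phase shift p(f), in radians
G = A * np.exp(1j * p)          # the complex gain G(f) = A(f) e^{i p(f)}

# Closed-loop gain of the negative feedback amplifier with feedback
# fraction k, computed with ordinary complex arithmetic.
k = 0.1
closed_loop = G / (1 + k * G)

print(abs(closed_loop))         # amplitude of the closed-loop gain, near 1/k
print(np.angle(closed_loop))    # phase of the closed-loop gain, near zero
```

Note the closed-loop amplitude lands near 1/k = 10 and most of the phase lag is squeezed out: the feedback tames both the gain and the delay.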

Thinking in terms of frequencies makes amplifier design an algebra problem. And the applications go far beyond amplifiers. If I want to understand how a control system works, I might have this more complex diagram:

Here, C is the control system. P is the plant that we’re trying to control. There are a bunch of different signals in this pipeline. There is a reference signal we’d like the plant to track. There is a disturbance signal that impacts the plant. There is noise that corrupts our measurements.

The plant and the control system are complex dynamical systems, but if I assume they are linear, I need only think about how they amplify and shift different frequencies. I can write every mapping between any of the signals in the diagram in terms of the frequency responses of C and P. For example, the map from the disturbance to the plant output is

P(f) / (1 + P(f) C(f))

This is the same loop as we saw in the negative feedback amplifier. The map from the reference signal to the plant output is

P(f) C(f) / (1 + P(f) C(f))

The map from the measurement noise to the measured plant output is

1 / (1 + P(f) C(f))

With these formulae, we can think about how these different functions change with different frequencies. We can try to design controllers that make these functions look the way we’d like them to look. For computation, everything is the ordinary algebra of complex numbers.
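A minimal sketch of that computation, with a toy plant and controller of my own choosing (a first-order low-pass plant with a 1 Hz cutoff and a pure-gain controller, neither taken from the post):

```python
import numpy as np

# Frequencies from 0.01 to 100 Hz, and the corresponding complex variable.
f = np.logspace(-2, 2, 400)
s = 2j * np.pi * f

# Toy frequency responses: illustrative choices, not from the post.
P = 1.0 / (1.0 + s / (2 * np.pi))      # plant: low-pass, cutoff 1 Hz
C = 10.0 * np.ones_like(s)             # controller: pure gain of 10

L = P * C                              # loop gain
S = 1 / (1 + L)                        # noise -> measured output
T = L / (1 + L)                        # reference -> plant output
D = P / (1 + L)                        # disturbance -> plant output

# S + T = 1 at every frequency: you cannot reject measurement noise and
# track references perfectly at the same frequencies.
print(np.max(np.abs(S + T - 1)))       # ~ 0, up to floating-point error
```

Plotting |S|, |T|, and |D| against f is the classic loop-shaping picture: design C so these curves look the way you want.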

The big conceptual leap is understanding frequency and phase and their representation as complex numbers. You can *hear* this stuff. I can open up my audio workstation and play these concepts for you. This is how I learned signal processing in the first place. I was an electronic music nerd and wanted to understand what oscillators and filters were doing. The tricky complex analysis sticks when you hear a concept in a song you like.

Let me use the rest of the week to describe more about what people learned about robustness and uncertainty through the lens of the frequency domain. It’s a powerful way to think about feedback. We’d be better off if we could explain this with simple difference equations, but let me set that goal by first explaining what we can do. And maybe you can help me make these explanations simpler.

FWIW, whenever I teach undergrad control, I always aim to emphasize one key fact: complex exponentials of the form exp(st), where s is complex, are eigenfunctions of linear time-invariant systems. The reason why you need s to be complex (as opposed to purely imaginary, which would only give you sinusoids) is that the corresponding set of signals (sinusoids with exponentially decaying or exponentially growing envelopes) is sufficiently rich to allow for things like system ID and for analyzing both transient and steady-state behavior.
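A discrete-time sketch of the commenter’s point, with a made-up FIR filter: a sampled complex exponential z^n goes through an LTI system and comes out as the same signal scaled by the complex number H(z).

```python
import numpy as np

# Impulse response of some LTI filter (arbitrary illustrative numbers).
h = np.array([0.5, 0.3, 0.2])

# A decaying complex exponential: |z| < 1 gives a sinusoid with an
# exponentially shrinking envelope, the discrete analogue of exp(st).
z = 0.9 * np.exp(1j * 0.4)
n = np.arange(50)
x = z ** n                             # input signal z^n

y = np.convolve(h, x)[: len(x)]        # filter output

# Transfer function evaluated at z: H(z) = sum_k h[k] z^{-k}
H = sum(hk * z ** (-k) for k, hk in enumerate(h))

# After the short transient, the output is just H(z) times the input:
# z^n is an eigenfunction, and H(z) is its eigenvalue.
print(np.allclose(y[3:], H * x[3:]))   # True
```

The same identity with |z| = 1 recovers the sinusoidal gain-and-phase story; letting |z| differ from 1 is what buys the transient analysis the comment mentions.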

Loving this series. Has me side-eyeing the spine of my Ogata textbooks from across the room.

Not sure if this fits in with your game plan yet, but I'd be interested to hear how/if robust control fits into the modern optimization-industrial complex.