Yes and no - I saw you palm that card. By assuming the system could be treated as linear, you assumed away the hard part of the general problem. Instead, suppose your plant is some big, complicated nonlinear causal system, and that its initial condition, when we first turn on the feedback loop, is nowhere close to where we want it to be - not in the linearizable neighborhood of the desired end condition, and not even in the same basin of attraction. Let's assume for the moment that we do have a good-enough causal computer model of the system, suitable for use in a model predictive control loop, but that the model is not remotely close to being invertible, and there is no guarantee that any controlled state trajectory exists that will get us to the desired end condition, or anywhere very close to it. Even so, we still want to find a way onto a trajectory that will get us as close as possible.
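To make the setup concrete, here is a minimal sketch (the toy plant, bounds, and numbers are my own illustrative assumptions, not anything from the post): a receding-horizon controller that needs only a *forward* model, never an inverse one, and simply steers toward whatever reachable state its rollouts predict will land closest to the target.

```python
# Receding-horizon control of a toy nonlinear plant that starts in the
# "wrong" basin of attraction. All constants here are illustrative.

def step(x, u, dt=0.1):
    # Toy bistable plant x' = x - x^3 + u, with stable equilibria near
    # x = -1 and x = +1; sufficiently weak inputs cannot leave a basin.
    return x + dt * (x - x**3 + u)

def rollout(x, us):
    # Forward-simulate a candidate control sequence (no inversion needed).
    for u in us:
        x = step(x, u)
    return x

def mpc_move(x, target, horizon=8, u_set=(-0.5, 0.0, 0.5)):
    # Crude exhaustive search over constant-control rollouts: pick the
    # bounded input whose predicted endpoint lands closest to the target.
    return min(u_set, key=lambda u: abs(rollout(x, [u] * horizon) - target))

x, target = -1.0, 1.0            # start nowhere near the desired condition
for _ in range(400):
    x = step(x, mpc_move(x, target))
print(round(x, 3))               # 1.0 -- crosses the basin boundary
```

If the input bound were tightened below what it takes to escape the left basin, the same loop would still do the right thing in spirit: it would settle at the reachable state closest to the target, which is exactly the "as close as possible" framing above.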
This is the general problem that I've been thinking about a lot lately, because it comes up in some of the most important "wicked" problems confronting us (meaning humanity as a whole), such as how to get onto a trajectory that will carry us safely past the ongoing polycrisis/metacrisis and into a long-term "protopian" future.
Beyond the particular application to the polycrisis, I really would appreciate seeing classical control situated within modern control, to justify the separation into high-frequency stabilization of linear time-invariant systems with ergodic disturbances and... all the crap we apply MPC or RL to.
I feel that the complexity we are facing is mostly due to a complicating, nonlinear, and highly dynamic factor: the system (or parts of it) fighting back against the protopian target because of diverging individual targets.
So whichever control rule you come up with, parts of the system will find ways to game it and thwart it to their purposes. See Goodhart's Law.
A great example of this is German tax law. It aims for fairness and has grown unbelievably complicated over time, yet the greediest and most capable participants still find enough loopholes to counteract the intended purpose of fair taxation. Or Formula One.
That does not mean linear control theory cannot give us good ideas and a basic understanding, but I think the main problem for the protopian future is not controlling the complex system itself but aligning the goals of each sub-system (which you can, of course, do with control loops).
I think you are thinking about the right issues, but I would suggest taking a somewhat different perspective: whenever we have human beings in the loop, i.e. intelligent agents that can and do define their own objectives and act on them, if we try to force them to perform a function that is not in close alignment with their self-defined objectives, they won't like it, so they will, as you say, try to find some way to game the system, thwart those externally imposed objectives, and instead pursue their own objectives.
But now let's consider what would happen if we came at the problem from the opposite direction: instead of trying to force the people in the loop to do what we want, we could design the system to help each of our human users advance their own self-defined aims more efficiently and effectively. We would provide the models and mechanisms needed to give each user model-based decision support, enabling them to explore the predicted consequences of the different possible courses of action open to them, and then choose among them (basically what Daniel Kahneman calls "System 2 thinking": slow, logical, and deliberative). Then, over time, as they encounter certain kinds of decision-making tasks over and over again, they will likely find that they can *automate* some of their more routine and/or less consequential decision-making, at which point they can delegate those particular tasks to automated model predictive control (MPC) loops.
The basic idea here is that we'd be getting our control signals directly from the humans in the loop, and, instead of trying to align their actions with some externally defined higher-level objective construct, we would design our system to pursue the implicitly defined higher-level objective that arises as we strive to help all of our users, simultaneously, advance their own objectives. Basically, this comes down to helping them find and exploit the many win-win opportunities that naturally tend to arise in nonzero-sum "games", while deftly avoiding the kinds of multipolar lose-lose traps they might otherwise get stuck in.
How, you may wonder, can we reasonably expect to figure out how to do that, consistently, reliably, and within the time scales available to our human users for deciding what control moves to make? Well, I can't offer a closed-form proof, but if you look at some of the recent progress in advanced MPC approaches - such as distributed MPC, hierarchical MPC, and MPC systems involving many different agents with differing objectives, differing optionality, and differing prediction horizons - it's starting to look more and more as if all you need to do is provide the agents with the models and the basic mechanisms they need to implement their own local control logic, and a near-optimal higher-level control architecture will automatically self-assemble (where "optimal", in this case, refers to a kind of "consensus optimality" that emerges from that process of self-assembly).
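A toy sketch of that self-assembly idea (my own construction for illustration, not taken from any particular paper): each agent repeatedly makes a local best response that trades off its own target against agreement with the other agents. No central controller computes the outcome; it emerges as the fixed point of the purely local updates - a miniature "consensus optimality".

```python
# Three agents, each with its own target, each running only local logic.
# Agent i minimizes (x_i - t_i)^2 + w*(x_i - mean_of_others)^2, whose
# closed-form best response is a weighted average of the two pulls.

targets = [0.0, 1.0, 2.0]        # each agent's self-defined objective
x = list(targets)                # start with everyone at their own target
w = 1.0                          # weight on agreeing with the others

for _ in range(100):
    n = len(x)
    mean_others = [(sum(x) - x[i]) / (n - 1) for i in range(n)]
    # Local best response; no agent sees any global objective function.
    x = [(targets[i] + w * mean_others[i]) / (1 + w) for i in range(n)]

print([round(v, 3) for v in x])  # [0.6, 1.0, 1.4] -- each agent ends up
                                 # between its own target and the group
```

The equilibrium balances every agent's objective against every other's, and it would shift smoothly if any agent changed its target or its weight `w` - which is the (much simplified) flavor of the emergent higher-level behavior described above.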
Caveat lector: some of the above involves conjecture on my part, extrapolating just a bit beyond the research results that I have thus far seen published by others, and also deliberately glossed over certain important details that relate to our own work in progress, which I'm not yet ready to share.
Further reading: check out some of the links on my company's website, www.timelike.systems
Intriguing, thanks for the inspiration, will give it a closer read next week!
However, I believe that the main problem is that due to the combination of evolution + finite resources + 5% of people being psychopaths, there are goals which are not reconcilable with others. Or in a more dramatic formulation: Some people are evil and want evil things.
(Side note: Thinking about this formulation which I do not really like due to its drama-factor, I wonder whether, for my personal use, I can define "evil" as "whatever inhibits the majority from reaching goals in line with the common good". So thanks already for putting me on this train of thought.)
Assuming those irreconcilable goals exist, we foremost need some goal-shaping on top of any group-level control mechanism. As I believe in giving agency and in the mechanism of a well-regulated market or framework, I believe that once we have aligned the goals and mitigated those which do not fit, the implementation will follow automatically, and which method you choose is more a question of trajectory optimisation, i.e. tweaking performance.
How to get there, I have no idea. In the past, there have been temporarily successful shots at this. Most of them had a religious taint (Sparta, early Rome, the church in the Middle Ages, fascism, the early socialists), but none of them worked for the greater good. What has worked rather well is the way the Western world has shaped goals through laws and regulated markets (the US in the '50s, the EU in some civil and ecological aspects from the '80s onwards, the Scandinavians) - before the greedy and power-hungry corrupted the good setup over time and let the pendulum of civilisation swing in the other direction again.
(Sorry for typos or inconsistencies, due to the site acting up, I had to type most of this blindly.)
Some good thoughts, and deserving of a fuller reply than I have time to write just now. I do have some ideas on how to "get there", but they'll take some explaining. To get you thinking, however, you might want to google for research results related to cooperative/coalitional game theory and to distributed and hierarchical model predictive control.
For linear systems in particular, linear algebra and geometry come together in a unified way that is unique to them (not possible for nonlinear systems). For the state feedback example that you showed, a geometric interpretation involving the column spaces of the various matrices and how they interact could be an intuitive teaching tool for some types of students (like me).
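For a 2-D version of that geometric picture, here is a small numeric sketch (the particular A, B, and pole locations are my own illustrative choices, not from the example shown): the columns of the controllability matrix [B, AB] span the subspace of reachable states, and when they span all of R^2, state feedback can place the closed-loop poles wherever we like.

```python
# x' = Ax + Bu in companion form, so coefficient matching is by inspection.
A = [[0.0, 1.0], [-2.0, -3.0]]   # open-loop char. poly: s^2 + 3s + 2
B = [0.0, 1.0]

# Column AB = A @ B; together with B it forms the controllability matrix.
AB = [A[0][0]*B[0] + A[0][1]*B[1],
      A[1][0]*B[0] + A[1][1]*B[1]]

# Geometric controllability test in 2-D: B and AB span R^2 iff they are
# not collinear, i.e. det([B, AB]) != 0.
det_ctrb = B[0]*AB[1] - B[1]*AB[0]
assert det_ctrb != 0, "not controllable: columns are collinear"

# In companion form, u = -[k1, k2] x shifts only the bottom row, so the
# gains for a desired char. poly s^2 + 2s + 1 (double pole at s = -1)
# are just differences of coefficients:
k1 = 1.0 - 2.0                   # desired a0 minus open-loop a0
k2 = 2.0 - 3.0                   # desired a1 minus open-loop a1

A_cl = [[0.0, 1.0], [-2.0 - k1, -3.0 - k2]]
trace = A_cl[0][0] + A_cl[1][1]
det = A_cl[0][0]*A_cl[1][1] - A_cl[0][1]*A_cl[1][0]
print(trace, det)                # -2.0 1.0 -> char. poly s^2 + 2s + 1
```

The same column-space picture explains *why* uncontrollable systems resist pole placement: feedback can only move the state along the span of [B, AB], so any direction outside that span keeps its open-loop dynamics.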