I have a defense for the "everything is optimization" viewpoint, possibly based on a misinterpretation of the phrase. My specific claim is the following: every decision is optimal for some objective (probably many!). So the real question is *which* objectives are our decisions optimizing, and how does that compare with what we actually want? In this perspective, optimization is a language for making sense of choosing between options. And as long as we want to go on making choices or taking actions, we can't get away from optimization!
All this to say, perhaps the problem arises when optimization is taken to be prescriptive, rather than descriptive. And it is natural to understand optimization as being prescriptive.
I used to be very into this idea, but I'm worried that it's hard to find examples of it being useful. Human decisions are poorly modeled as optimal in any sense. Evolution (at least in the adaptationist view) isn't really optimizing anything. Do you have other examples in mind for inverse optimization?
I think I agree that inverse optimization may not be very useful and that optimization is a reductionist lens for describing the world. But I still think it is instructive (like Maximilian says below) particularly *for the person making the decision or designing the decision-making algorithm*. I would argue that in the algorithmic case, designers of automated systems owe it to others to explain their algorithm in terms of an optimization problem. Newsfeeds "ranked in terms of the weighted sum of several personalized predictions" are more easily critiqued when they are described as "designed to maximize engagement". Let me expand on this point later in response to today's post.
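To make the contrast concrete, here's a caricature of the first description in Python. The prediction names, weights, and the dummy predictor are all hypothetical, not any real platform's:

```python
# A caricature of "ranked by a weighted sum of several personalized
# predictions". Names and weights are made up, not from any real system.
def rank_feed(posts, predict):
    weights = {"p_click": 1.0, "p_comment": 4.0, "p_reshare": 8.0}

    def engagement_score(post):
        preds = predict(post)  # per-user model outputs for this post
        return sum(w * preds[name] for name, w in weights.items())

    # Described this way, the design choice is buried in the weights;
    # described as "maximize engagement", it is out in the open.
    return sorted(posts, key=engagement_score, reverse=True)

# Tiny usage with a dummy predictor:
posts = ["post_a", "post_b"]
dummy_predict = lambda post: {
    "p_click": 0.3,
    "p_comment": 0.05,
    "p_reshare": 0.01 if post == "post_a" else 0.1,
}
print(rank_feed(posts, dummy_predict))
```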
I also think that it can be instructive for individuals to model their own decisions in terms of optimization as a method for thinking through things clearly, though this is getting dangerously close to self help. On that topic, though, I recently picked up Le Guin's translation of Lao Tzu's "The Way" and I wonder if its concept of "not doing" is our answer to getting away from the optimization mindset.
Yeah, this is why I pulled back from Bayesian brain approaches. It is instructive to find examples of objectives+costs consistent with some observed behaviour, but it's certainly never unambiguous, which means it's limited at the main thing it sets out to do: provide a theory of behaviour.
Hi Ben - long time lurker, big fan of the substack. As usual, I largely agree with your thinking here. I'm writing to share with you an instance of a trade-off in my own work and how it's -- slowly, over time -- been getting "resolved". If nothing else, I hope at least the psychology of it all may be interesting :)
In our work on balancing covariates in randomized experiments, we found a fundamental trade-off in choosing a randomization scheme: it is impossible to simultaneously achieve maximal robustness (i.e., assignments are independent) and maximal covariate balance (i.e., the treatment and control groups look similar). On the one hand, you may want independent assignments to achieve a min-max design when you aren't sure what the outcomes may be. On the other hand, you may want to balance covariates to improve precision when the outcomes are "related to" the covariates. We proposed a design that allows experimenters to trade off these two competing objectives. However, we concluded that there is no "right" point to choose in the trade-off -- we don't know the outcomes before we run the experiment, so we pretty much have to take an educated gamble. This is how I thought about our result for a long time.
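(If it helps to see the tension in miniature, here's a toy sketch in Python -- not our actual design, just plain Bernoulli randomization versus textbook rerandomization -- showing how buying balance gives up independent assignments.)

```python
# Toy illustration only: Bernoulli randomization (independent, robust) vs.
# rerandomization that rejects imbalanced draws (balanced, but assignments
# are no longer independent across units). Not the design from our paper.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 5
X = rng.normal(size=(n, d))  # covariates

def imbalance(z, X):
    """Norm of the difference in covariate means between treatment and control."""
    return np.linalg.norm(X[z == 1].mean(axis=0) - X[z == 0].mean(axis=0))

def bernoulli_design(n, rng):
    """Maximally robust: each unit assigned independently by a fair coin."""
    return rng.integers(0, 2, size=n)

def rerandomized_design(X, rng, threshold=0.25, max_tries=10_000):
    """Better balance: redraw assignments until imbalance is below a tolerance."""
    for _ in range(max_tries):
        z = rng.integers(0, 2, size=len(X))
        if 0 < z.sum() < len(X) and imbalance(z, X) < threshold:
            return z
    return z  # give up and return the last draw

z_bern = bernoulli_design(n, rng)
z_rr = rerandomized_design(X, rng)
print("imbalance under Bernoulli randomization:", imbalance(z_bern, X))
print("imbalance under rerandomization:        ", imbalance(z_rr, X))
```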
Fast forward a few years, and we developed asymptotic analyses (hear me out) at the request of reviewers. An asymptotic analysis requires a few regularity conditions, which are honestly pretty mild, all things considered. But under these regularity conditions, a funny thing happened: the trade-off disappeared! It turned out that there was an "optimal" way to trade off robustness and covariate balance which ensures that we achieve both maximal robustness and maximal covariate balance in the limit. This was surprising to us!
Why am I writing this? Not because I do or don't believe in asymptotic analyses (well...), but rather to share this instance where a fundamental trade-off over a Pareto frontier turned out to have an "optimal" solution once we viewed the problem through a "limiting" lens. I wonder what sorts of other "limiting lenses" we can use to obtain a meaningful resolution to Pareto problems.
That's neat! I was talking to Nik Matni about another set of problems where he could broadly find joint minimizers. I don't want to suggest that these examples don't exist. But I'd love to understand more about *when* they exist. Do we have to get lucky? Or is there a way of thinking about how to design things so that joint optimization is common?
From this particular experience, I think the answer depended entirely on our viewpoint. If we viewed things from a purely finite-sample perspective, there were two competing objectives, never to be jointly optimized. But when we put on our “limiting lenses”, we found that the two objectives can be jointly minimized in some limit. And although I don’t believe that the real world "as it is" is well modeled by the conditions of the limiting lens, I think that the guidance provided by the limiting lens (here, one well-chosen point of the trade-off) can be helpful. More helpful than saying “it’s a trade-off for which I offer you no advice - you choose!”
I have no idea what this means outside of my specific context, but: it could be that in order for a joint minimizer to exist, we need to look at the overall problem through some appropriate “limiting lens”. Does this make sense or am I completely rambling? Maybe what I am saying isn't relevant to what you have in mind lol
I love this question, and that you're thinking carefully about it. Trade-offs and multiple objectives of course often arise in management (product, organizational, etc.). In a conversation with Anand Rajaraman last year, he said that when he's wearing his VC hat he's often asked how to trade off objectives like growth and profit. His advice was to approach such problems as "optimize one thing subject to constraints on the rest", e.g., optimize growth subject to profits being non-negative. He immediately pointed out the obvious "optimization mindset" critique of this proposal -- that the constrained formulation is just making some trade-off in a multi-objective problem implicit -- but his point was that having a single objective to send to the moon provides a tremendous amount of clarity for an organization. Leadership of the form "we are trying to grow as much as possible, subject to non-negative profits" is far clearer than "we are putting 3/4 weight on growth and 1/4 weight on profitability". I found this really insightful and have thought about it a lot during the last year, in diverse multi-objective contexts. Mathematically equivalent doesn't mean operationally equivalent. I would love to know if this idea has been written about; Anand's informal comments on the matter were the first time I'd heard it.
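To make his framing concrete, here's a toy version in code; the growth and profit coefficients and the budget are made-up numbers, not from any real company:

```python
# A toy, purely illustrative linear program for "maximize growth subject to
# non-negative profit". All numbers are hypothetical.
from scipy.optimize import linprog

# Decision variables: x = (marketing spend, efficiency spend), both >= 0.
# growth(x) =  3*x[0] + 1*x[1]
# profit(x) = -2*x[0] + 1*x[1]   (marketing burns cash, efficiency earns it)
# budget:      x[0] + x[1] <= 10

c = [-3.0, -1.0]            # linprog minimizes, so negate growth
A_ub = [[1.0, 1.0],         # budget:        x0 + x1 <= 10
        [2.0, -1.0]]        # profit >= 0 -> 2*x0 - x1 <= 0
b_ub = [10.0, 0.0]

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
x = res.x
print("plan:", x)
print("growth:", 3 * x[0] + x[1], " profit:", -2 * x[0] + x[1])
```

Even in this toy, the clarity point shows up: "maximize growth subject to profit >= 0" reads straight off the objective and constraint rows, whereas a weighted-sum objective hides the policy inside the weights.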
Hmm, let's riff on this a bit. Because what Anand describes doesn't sound like optimization to me. Having a singular priority is fine, but this hides all of the associated management costs. Priorities are not the same as optimization. What do you think?
And moreover, people like to say that companies blindly maximize share price, but, like most things in economics, we all know it's not that simple.
Agree this is all an over-simplification, but still find it useful. And agree that management costs/etc aren't really being considered here, so it's a limited "optimization" perspective in that sense.
Regarding riffing, maybe a more useful generalization of my comment here is to say that your multi-objective challenges can usefully be viewed as implicitly single-objective but constrained optimization problems. Then the special cases where joint optimization is feasible are the ones where the constraints aren't tight, etc. So maybe this circles around to being a pro-"optimization mindset" comment, in that having optimization theory to switch between multi-objective and constrained problems is one useful way to think, operationally, about multi-objective problems.
I like this. Though I still think you end up with problems with hyperparameters in the constraints (who's to say what the budget should be for vacation time?).
>> Mathematically equivalent doesn't mean operationally equivalent
Great point! I hadn't considered the leadership viewpoint beforehand, but that makes total sense. I can see that the clarity provided by an explicit single-objective problem would be very helpful in scenarios like this.
Nice post. Here are two directions that you probably already know about, but just in case...
One move, of course, is to set a limiting value (max or min) on all but one of the objectives to convert them into constraints. Then optimize the remaining objective subject to those constraints. In biological conservation, one objective is typically biological (e.g., risk of species extinction) and the other is economic (e.g., cost of the interventions). Setting a dollar value on species extinction is very difficult, but subject matter experts are comfortable (in my experience) with defining a maximum probability of species extinction over some time interval (e.g., 100 years).
Another setting where optimization tools can be useful is in helping the user refine their objectives. One can start with a multi-objective problem and then provide tools for helping the user explore the Pareto frontier. This can sometimes help elucidate their trade-off preferences. There are OR techniques for allowing the user to suggest a point in the trade space and then project that point onto the Pareto frontier -- useful for exploring higher-dimensional problems. For medical decision making, I would guess that the real goal is to help the patient decide how to navigate the trade-offs, so perhaps this approach would be useful.
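As a minimal sketch of that kind of tool (the two quadratic objectives below are placeholders, not a real conservation or medical model): cap one objective at a sweep of levels, minimize the other, and the resulting solutions trace out the Pareto frontier for the user to inspect.

```python
# Epsilon-constraint sketch: sweep a cap on one objective, minimize the other.
# The quadratics are stand-ins for, e.g., intervention cost and extinction risk.
import numpy as np
from scipy.optimize import minimize

f1 = lambda x: float(x[0] ** 2)           # objective to minimize (e.g., cost)
f2 = lambda x: float((x[0] - 1.0) ** 2)   # objective to cap (e.g., risk)

frontier = []
for eps in np.linspace(0.0, 1.0, 11):     # sweep of caps on f2
    res = minimize(
        f1, x0=[0.5],
        constraints=[{"type": "ineq", "fun": lambda x, e=eps: e - f2(x)}],
    )
    frontier.append((f1(res.x), f2(res.x)))

for c, r in frontier:
    print(f"cost={c:.3f}  risk={r:.3f}")
```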
Lurking under the Pareto curve is the idea that equilibria are good. But the value of an equilibrium is not a clean mathematical statement! Why are equilibria desirable? This is a normative judgement.
I don't understand this point (perhaps because I haven't read much economics). Can you say more?
Ben, I'm in agreement with a lot of what you've written so far, but I think your take on the problem of 'multiple objectives' is somewhat unproductive.
It is indeed a category error to think that one can 'optimize' a trade-off. But it is the job of the analyst to expose and make explicit the trade-offs in any given problem – and she can do that using precisely the tools of constrained optimization and Pareto optimality!
This is clear if you adopt a 'feasibility first' approach rather than the 'optimania' approach. That is, first define the variables and constraints relevant to the problem, then impose an ordering on the feasible solutions that enables us to pick at least one of them. (This is indeed a domain-specific and sometimes normative design choice, not a problem of optimization per se.)
For instance, in a medical testing scenario we may want to cap the false positive error rate. And among all testing settings that respect this constraint, we wish to minimize the false negative error. The resulting Pareto frontier has no necessary notion of equilibrium lurking beneath it, as far as I can see?
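In code, that recipe is just a constrained threshold choice; the Gaussian scores below are simulated stand-ins for a real diagnostic score:

```python
# "Cap the false positive rate, then minimize the false negative rate"
# for a score-based test. Scores are simulated, not from real patients.
import numpy as np

rng = np.random.default_rng(0)
neg = rng.normal(0.0, 1.0, 5000)   # scores of healthy patients
pos = rng.normal(1.5, 1.0, 5000)   # scores of sick patients
alpha = 0.05                       # the cap on FPR (set by policy, not by the analyst)

# Smallest threshold whose empirical FPR is still <= alpha; among all
# thresholds satisfying the cap, this one has the lowest FNR.
t = np.quantile(neg, 1.0 - alpha)
fpr = np.mean(neg >= t)
fnr = np.mean(pos < t)
print(f"threshold={t:.2f}  FPR={fpr:.3f}  FNR={fnr:.3f}")
```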
At what level should we cap the false positive error? Well, that's not for the optimization analyst to answer! It belongs to the domain of medical policy and decision-making.
The problem isn't that we "can't optimize"; it's that the problem is ill-posed. What if one objective function is the negative of the other? How do you know it isn't?
In real life, what usually happens is that one objective becomes a constraint. Where do you think all of those constraints came from in the first place! The barrier to converting an objective into a constraint is the difficulty of knowing what you can live with. Having multiple objectives just means you don't know what minimum you can tolerate, unlike, say, nutritional needs.