The suboptimal life.
It's ok if we're not optimizing anything.
Programming Note: I’m traveling this week and the posting schedule will be erratic.
Continuing down my path of optimization apostasy, I wanted to respond to Sarah Dean and Max Raginsky about the troubles of Inverse Optimization. If everything is optimization, then everything we see before us must be optimizing something. Inverse Optimization aims to determine which optimization problem gives rise to observed dynamical behavior.
The problem is that anything can be optimal for something. I can imagine a variety of functions for which my coffee cup is a maximum. But why is it helpful to think about what observed reality optimizes? Perhaps it could inform the design of future coffee cups. It could give insights into how my coffee cup came to be in the first place. But for Inverse Optimization to be useful, the fitted optimization problem must extrapolate to explain new observations.
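To make the "anything can be optimal" point concrete, here's a toy sketch: for any observed configuration, we can always write down an objective that the observation maximizes exactly. The function and the coffee-cup "observation" below are illustrative inventions, not anyone's actual model.

```python
import numpy as np

def trivial_objective(x_obs):
    """Return a function whose unique maximizer is x_obs.

    This is why unconstrained inverse optimization is vacuous:
    every observation is optimal for *some* objective.
    """
    return lambda x: -np.sum((np.asarray(x) - np.asarray(x_obs)) ** 2)

# Treat the coffee cup's position as an "observation."
cup = [1.0, 2.0, 0.5]
f = trivial_objective(cup)

# The observation attains the maximum (zero); every other point scores lower.
assert f(cup) == 0.0
assert f([0.0, 0.0, 0.0]) < f(cup)
```

The fitted objective "explains" the cup perfectly and predicts nothing new, which is exactly the extrapolation failure described above.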
Physics provides many compelling examples of the power of Inverse Optimization. Light takes paths that minimize travel times (Fermat’s Principle). Much of modern physics can be derived through principles of least action. Of all possible realities, ours is the one that makes the action of some Lagrangian stationary. How generally can we look at non-engineered systems and determine what they optimize?
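Fermat's Principle is the rare case where the optimization story earns its keep: minimizing travel time across two media recovers Snell's law of refraction. A quick numerical check (the speeds and geometry below are made-up illustrative values):

```python
import numpy as np

# Light crosses from a fast medium (speed v1) into a slow one (speed v2).
# Fermat's Principle: the ray picks the interface crossing point x that
# minimizes total travel time.
v1, v2 = 1.0, 0.7      # propagation speeds in the two media
h1, h2 = 1.0, 1.0      # distances from source and target to the interface
d = 2.0                # horizontal separation between source and target

x = np.linspace(0.0, d, 200_001)   # candidate crossing points on the interface
time = np.sqrt(x**2 + h1**2) / v1 + np.sqrt((d - x)**2 + h2**2) / v2
x_star = x[np.argmin(time)]        # minimum-time crossing point

# The minimizer satisfies Snell's law: sin(theta1)/v1 == sin(theta2)/v2.
sin1 = x_star / np.hypot(x_star, h1)
sin2 = (d - x_star) / np.hypot(d - x_star, h2)
assert abs(sin1 / v1 - sin2 / v2) < 1e-3
```

Here the optimization principle extrapolates: the same objective predicts the ray's path for any geometry and any pair of media, which is precisely what the examples later in this post fail to do.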
I worry that as we push away from the elegantly constrained world of physics, the Inverse Optimization mindset is more harmful than helpful. For example, “Everything Is Optimization” feeds a purely adaptationist view of evolution. For the adaptationist, traits exist because they improve reproductive success. But the dynamics of evolution are far more complex. Evolutionary biologists have argued strongly against these adaptationist views, most famously in the essay “The Spandrels of San Marco and the Panglossian Paradigm” by Gould and Lewontin. Lewontin has written extensively about why evolutionary selection can’t entirely be explained by optimizing.
Another prime example where Inverse Optimization fails is (all of) economics. Microeconomics tells us that people maximize utilities and we just have to apply Inverse Optimization to figure out what these utilities are. But countless studies show people don’t maximize. And we can’t correct for this lack of maximization with psychological babble about predictable irrationality. Economics and behavioral economics are neither predictive nor generalizable. Fitting an optimization model in one context doesn’t generalize to another context. The evidentiary record of Game Theory shows people play all sorts of different ways depending on culture and experimental setup, and it is impossible to predict from one experiment how any other will go. What’s worse is that the behavioral and evolutionary econometric “fixes” have all been shown to be unreproducible storytelling or, in recent years, outright fraud.
Making matters worse, when you combine the adaptationist mindset with game theory, you convince yourself that all dominant power structures are explained by strategic just-so stories. Inverse Optimization leads to grotesque ends like cultural evolutionary theory and effective altruism. The idea is that we can back out what is optimal from our experience and then reverse engineer ethics and morality as optimal. This worldview is simultaneously facile and authoritarian, yet it is embraced by many populist intellectuals in the name of science.
Am I arguing there’s a slippery slope to the optimization mindset? I worry that I might be.