Discussion about this post

Sarah Dean

I have a defense for the "everything is optimization" viewpoint, possibly based on a misinterpretation of the phrase. My specific claim is the following: every decision is optimal for some objective (probably many!). So the real question is *which* objectives are our decisions optimizing, and how does that compare with what we actually want? In this perspective, optimization is a language for making sense of choosing between options. And as long as we want to go on making choices or taking actions, we can't get away from optimization!
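To make that concrete with a toy example (just an illustration of the indicator-objective trick, nothing deeper):

```python
# Toy illustration: any fixed decision is optimal for at least one objective,
# e.g. the indicator function of that very decision.
def indicator_objective(chosen):
    return lambda d: 1.0 if d == chosen else 0.0

options = ["A", "B", "C"]
f = indicator_objective("B")       # the objective "revealed" by choosing B
assert max(options, key=f) == "B"  # B is optimal for this objective
```

So the interesting question is never *whether* a decision optimizes something, but whether the objective it reveals is one we'd endorse.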

All this to say: perhaps the problem arises when optimization is taken to be prescriptive rather than descriptive. And it is natural to understand optimization as prescriptive.

Christopher Harshaw

Hi Ben - long-time lurker, big fan of the substack. As usual, I largely agree with your thinking here. I'm writing to share with you an instance of a trade-off in my own work and how it's -- slowly, over time -- been getting "resolved". If nothing else, I hope at least the psychology of it all may be interesting :)

In our work on balancing covariates in randomized experiments, we found a fundamental trade-off in choosing a randomization scheme: it is impossible to achieve maximal robustness (i.e., assignments are independent) and maximal covariate balance (i.e., the treatment and control groups look similar). On the one hand, you may want independent assignments to achieve a min-max design when you aren't sure what the outcomes may be. On the other hand, you may want to balance covariates to improve precision when the outcomes are "related to" the covariates. We proposed a design that allows experimenters to trade off these two competing objectives. However, we concluded that there is no "right" point to choose in the trade-off -- we don't know the outcomes before we run the experiment, so we pretty much have to take an educated gamble. This is how I thought about our result for a long time.
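To give a toy flavor of that tension, here's a small sketch (matched pairs standing in for a generic balance-seeking design -- not the design from our paper -- and a single made-up covariate; numpy only):

```python
# Toy comparison of an independent-assignment design vs. a balance-seeking one.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)  # one covariate per experimental unit

def bernoulli(x, rng):
    # maximal robustness: every assignment is an independent fair coin flip
    return rng.integers(0, 2, size=len(x))

def matched_pairs(x, rng):
    # balance-seeking stand-in: pair units with similar covariates and
    # randomize within pairs, which couples the assignments
    z = np.zeros(len(x), dtype=int)
    order = np.argsort(x)
    for i in range(0, len(x) - 1, 2):
        a, b = order[i], order[i + 1]
        if rng.integers(0, 2):
            a, b = b, a
        z[a], z[b] = 1, 0
    return z

def imbalance(x, z):
    # absolute difference in covariate means between treatment and control
    return abs(x[z == 1].mean() - x[z == 0].mean())

for name, design in [("independent", bernoulli), ("matched pairs", matched_pairs)]:
    draws = [imbalance(x, design(x, rng)) for _ in range(2000)]
    print(f"{name:13s} average covariate imbalance: {np.mean(draws):.3f}")
```

The matched-pairs scheme typically shows much smaller imbalance, but within each pair the two assignments are perfectly anti-correlated: you buy balance by giving up independence.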

Fast forward a few years, and we developed asymptotic analyses (hear me out) at the request of reviewers. An asymptotic analysis requires a few regularity conditions, which are honestly pretty mild, all things considered. But under these regularity conditions, a funny thing happened: the trade-off disappeared! It turned out that there was an "optimal" way to trade off robustness and covariate balance which ensured that we achieve maximal robustness and maximal covariate balance in the limit. This was surprising to us!
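Schematically (and glossing over the actual conditions and rates), the situation was something like:

```latex
% Schematic only: \phi is the trade-off parameter; Imb_n and Dep_n stand in
% for covariate imbalance and dependence between assignments at sample size n.
\[
\text{for each fixed } n:\ \text{no single } \phi \text{ attains both }
\min_\phi \mathrm{Imb}_n(\phi) \text{ and } \min_\phi \mathrm{Dep}_n(\phi),
\]
\[
\text{yet there is a sequence } (\phi_n) \text{ with }
\mathrm{Imb}_n(\phi_n) \to 0 \ \text{ and } \ \mathrm{Dep}_n(\phi_n) \to 0
\ \text{ as } n \to \infty.
\]
```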

Why am I writing this? Not because I do or don't believe in asymptotic analyses (well...), but rather to share this instance where a fundamental trade-off over a Pareto frontier turned out to have an "optimal" solution once we view the problem through a "limiting" lens. I wonder what sorts of other "limiting lenses" we can use to obtain a meaningful resolution to Pareto Problems.
