Discussion about this post

Maxim Raginsky:

I am in total agreement with all of this! My personal view is that (almost) everything is control, not optimization, but control understood in the Willemsian sense: restricting the behavior of one system by interconnecting it with another. There need not be any notion of optimality here: some system trajectories are simply forbidden, and then we can talk about how (or why) some of the allowed trajectories are selected or actualized. Borrowing a pithy phrase from Peter Gould (no relation to Stephen Jay Gould), control in general is about "allowing, forbidding, but not requiring." So, for example, nonadaptationist evolutionary mechanisms, such as genetic drift, fall into this category. The environment simply rejects (or forbids) nonviable options, but which particular path is taken depends on many factors, including local optimization-like mechanisms without any global optimization. This is also the case in social/economic settings: laws, regulations, customs, etc. forbid certain outcomes, but which of the allowed outcomes is selected may not be the result of optimization, although norms, values, etc. certainly come into play.
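
A minimal formal sketch of this behavioral picture, in standard Willemsian notation (the symbols below are generic illustrations, not taken from the comment itself): a system is just its set of allowed trajectories, and control is interconnection, i.e., intersection.

```latex
% Behavioral ("Willemsian") view: a system is a set of allowed trajectories,
% where \mathbb{T} is the time axis and \mathcal{W} is the signal space.
\[ \mathcal{B} \subseteq \mathcal{W}^{\mathbb{T}} \]

% Control by interconnection: restrict one behavior by intersecting it with
% another. Trajectories outside the intersection are forbidden; nothing in
% the formalism requires any particular allowed trajectory to be selected.
\[ \mathcal{B}_{\mathrm{controlled}} \;=\; \mathcal{B}_{\mathrm{plant}} \,\cap\, \mathcal{B}_{\mathrm{controller}} \]
```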

Sarah Dean:

A pet peeve of mine is the muddied thinking that arises from conflating biological phenomena and computer algorithms. On the computer science side, it obscures the fact that we are responsible for the outcomes, since it's our choices that determine the system's behavior. And on the biology side, we collapse the complexity into the comparatively crude and simplistic world of computation. So I am looking forward to reading the linked critique of adaptationist views! Many of the points you make here ring true.

Now let me expand on the "we are responsible" point. Where "everything is optimization" is most meaningful for me is when "everything" refers to "things that we build". To reiterate my comment on your last post, I think that automated systems are most clearly understood through the optimization lens. For example, the public "For you" Twitter algorithm ranks tweets by a weighted sum of the predicted probabilities of various engagement actions: https://slides.com/sarahdean-2/deck-4f230b?token=HmEsTam3#/7

It is fun to try to interpret these weights. Why is the weight on "report" -369 rather than -370? Why is the weight on "watch half of the video" 0.005 when all other weights are O(1)? I can only speculate that upranking videos led to a bad user experience, but there was internal pressure to "promote videos", so this was the compromise. Or maybe the predicted probability is often very close to 1 (since videos autoplay?), and 0.005 brings videos onto the same scale as other predicted engagement actions.
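
To make the weighted-sum picture concrete, here is a minimal Python sketch. Only the "report" (-369) and "watch half of the video" (0.005) weights come from the slide referenced above; every other action name and weight is a hypothetical placeholder, not the actual open-sourced values.

```python
# Minimal sketch of a weighted-sum engagement ranker in the style described
# above. Only the "report" and "video_watch_half" weights are from the cited
# slide; the other actions and weights are hypothetical placeholders.

WEIGHTS = {
    "like": 1.0,                # hypothetical placeholder
    "reply": 10.0,              # hypothetical placeholder
    "report": -369.0,           # from the cited slide
    "video_watch_half": 0.005,  # from the cited slide
}

def score(predicted_probs: dict[str, float]) -> float:
    """Rank score: sum over actions of weight * predicted probability."""
    return sum(WEIGHTS[action] * p for action, p in predicted_probs.items())

# A tweet with a likely "like", an autoplaying video, and a tiny report risk.
print(score({"like": 0.3, "reply": 0.01, "report": 0.001, "video_watch_half": 0.9}))
```

Note how the large negative report weight dominates: a predicted report probability of just 0.001 contributes -0.369 to the score, enough to wipe out a 30% chance of a like.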

All this to say, these weights were almost certainly decided with reference to some metrics, KPIs, etc. Many people have made the point that open-sourcing the algorithms is meaningless without access to the engagement models. I would argue that these metrics/KPIs are similarly important, both from the perspective of critique and from the perspective of thinking clearly about platform design. I agree with something Neal Parikh said on Twitter: there is clarifying value in a declarative representation of what a system is built for.
