The Quantification Trap
A computational paradox of the postmodern condition.
If we want to make decisions in a complex society, we need a shared language. Experts on the ground must summarize complex situations in their communication with decision makers decoupled from the field. They need to make their experiences legible to those they report to.
The easiest way to make situations legible is to quantify them. To count things, record figures in tables, compute statistics, and make charts. Quantification sorts complexity into simple bins, simplifying communication both up and down the chain.
When we speak in such quantified numerical summaries, our statements feel objective. We believe that appropriate quantification isn’t subject to the whims and opinions of an individual field worker. Once we agree upon standards, quantified measurements become scientific facts.
Once we have objectivity, we have authority. Making decisions based on objective facts is obviously in the best interest of everyone else, and we impose threats of chastisement, ostracism, or violence upon those irrational individuals who disagree.
And once these numerical summaries that we made out of whole cloth to simplify communication become authoritative, they become real. They become things we should strive to maximize.
This is the quantification trap.
The quantification trap is social-scientific canon. You could build this story entirely out of texts written before the year 2000. The role of quantification, measurement, and legibility in statecraft is laid out in James C. Scott’s Seeing Like a State (1998) and Alain Desrosières’s The Politics of Large Numbers (1993). Theodore Porter’s Trust in Numbers (1995) highlights the turn to quantification in pursuit of standardization and objectivity. The blind optimization of decontextualized metrics is core to Jean-François Lyotard’s characterization of The Postmodern Condition (1979).
Twenty-five years into the twenty-first century, I don’t think you should have to run a Science and Technology Studies sidequest to recognize the quantification trap. It’s obvious and almost trite when we say it out loud. It’s trendy to talk about how metrics and benchmarks are bad and to prattle on endlessly about Goodhart’s, Campbell’s, or Murphy’s Laws. And yet, we continue to organize ourselves around statistical summaries. Is the quantification trap an inevitable part of scale? Is it an inevitable part of efficiency? Is it an inevitable part of the dismal hierarchy of bureaucratic power? The great puzzle of our contemporary condition is why it’s so hard to escape.
Part of the puzzle is that making society computable pairs dramatic benefits with every cost. The constant tension in mathematical rationality lies in the interplay between its sweet spots and its limitations. Quantification creates an intersubjectivity for collective action. Mathematically rational governance lets systems and hierarchies see, but also makes it easy for them to maintain control. It facilitates posing clear questions and objectives, though crowds out nuance and multiplicity. It creates shared understanding through standardization but removes the discretion of experts. It lets us speak about maximizing the average welfare of populations, but erases individuals.
If there are such clear trade-offs with quantification, why do we always tend to side with “the data”? The acceleration of computation has made the quantification trap exponentially more contagious. As computers became ubiquitous, the quantification process became inevitable and invisible. We don’t think about how we are tethered to unfathomable computing machines. They’re just part of who we are now. Our devices measure us all the time, recording time-on-site and click-throughs. Everything has a like button. All of these measurements are churned through by data scientists hoping to hit their personal promotion metrics, regardless of whether the instrumentation means anything. The quantification trap is built out of an invisible fabric of computation.
I say this in the book, but I’ve never said it outright in the Irrational Decision. The book articulates the role of mathematical computation, optimization, and statistics as scaffolding in the elaborate quantification trap. To understand why we optimize what we optimize, it’s helpful to look at the history of the computational methods and language that box us in. The path from legibility to authority runs straight through computation and computerization. Quantification transforms experience into machine-readable data and a small number of interventions and outcomes. Decisions can only be automated once we throw away the messy, uncomputable parts. We maximize averages because it’s a convenient way to model uncertainty.
Now, I am far from the only person to talk about the quantification trap. I wrote about it today because I felt I needed this placeholder after the last few weeks of talking about my book. If you want a reading list from the past 25 years, I could compile an impossibly long bibliography. Even in the past year, crossover books like Fourcade and Healy’s The Ordinal Society and Nguyen’s The Score have articulated the same conundrum.
It’s good that more people are talking about this. What we count, compute, and optimize is a political decision. Counting flattens complexity, and the choice of what is left out is a question of power. The virality of the quantification trap forecloses better futures. We can’t strive for them if we can’t see the gilded cage we’re in.


Much of the time I think the 'rush to quantify' is just a panicked kid grasping for the edge of the pool: an unfounded conviction that the numbers (and more numbers, always more) will *always* make for better decisions, one that just so happens to alleviate the existential terror of responsibility and trust, to oneself or others. That's not all bad! Some judgement is unfair and unfounded and, look, science! But of course there are diminishing (and negative) returns to data, just like everything else, and at this point I think the correct default response to 'we're going to revolutionize X through Stasi-esque levels of tracking' is 'eh, probs not.'
There's a whole tranche of science-scented people out there, engineers and technologists and the like, who seem to think that an insistence on measuring and tracking an objective marker is uniformly a *replacement* for irrational vibes: a wholly distinct and superior way of examining the world that swapped out the module of their brain that sought the word of God in burnt entrails. But just watching them, it's clear that it's just a new flavor, a way of getting all the relief from augury that the world is unfolding as it should, with a dollop of superiority on top that they are doing things with *numbers*, and all the other kids were bad at numbers in school, but they weren't, and so are Smart and Good.
Nice entry. As you say, ". . . facilitates posing clear questions and objectives, though crowds out nuance and multiplicity." From National Research Council, 1989, 1996: "Every way of summarizing deaths embodies its own set of values (National Research Council, 1989). For example, reduction in life expectancy treats deaths of young people as more important than deaths of older people, who have less life expectancy to lose. Simply counting fatalities treats deaths of the old and the young as equivalent; it also treats as equivalent deaths that come immediately after mishaps and deaths that follow painful and debilitating disease. Also in the case of delayed illness and death, a simple count of adverse outcomes places no value on what happens to exposed people who may spend years living in daily fear of illness, even if they ultimately do not die from the hazard.
Using number of deaths as the summary indicator of risk implies that it is as important to prevent deaths of people who engage in an activity by choice as it is to prevent deaths of those who bear its effects unwillingly. Thus, the death of a motorcyclist in an accident is given the same weight as the death of the pedestrian hit by the motorcycle. It also implies that it is as important to protect people who have been benefiting from a risky activity or technology as it is to protect those who get no benefit from it. One can easily imagine a range of arguments to justify different kinds of unequal weightings for different kinds of deaths, but to arrive at any selection requires a judgment about which deaths one considers most undesirable. To treat all deaths as equal also involves a judgment. In sum, even so simple and fundamental a choice as how to measure fatalities is value laden. It can present a dilemma in which no single summary measure, no matter how carefully the underlying analysis is done, can satisfy the expectations of all the participants in a risk decision process."