18 Comments
Shreeharsh Kelkar:

I agree with you on the specifics--whether smartphones and social media are responsible for depression or not, they are clearly doing something, and not allowing them in classrooms shouldn't be controversial; the case for NPIs was made way too strongly (it was your blog posts on the Bangladesh mask study that made me realize how flimsy the effects were)--but I am not sure I buy the "What if we start from the assumption that everything in quantitative social science is wrong? What if we just ignored these papers and used our eyes?" alternative (and its ostensible reason: "The Science is making the situation worse."). (You're probably exaggerating for effect here, but still.)

For one thing, if we stopped the studies and used our eyes, we'd still be arguing about those things and we'd be arguing about them even more angrily than we do now. The sociologist Gil Eyal has a great way of putting this in his book The Crisis of Expertise. He says that what happened over the second half of the 20th century is that we started to use science more and more to solve political questions (should we build this dam? what NPI is the best for infectious diseases?) and in the process, as our political questions became fractious, we ended up politicizing science too. And it's not clear that descientizing politics will lead to any less conflict.

Woodhouse and Nieusma have a nice article called "When expert advice works, and when it does not" in which they argue that everyone's theory of experts and expertise has two component theories: a simplified one and a cynical one. According to the simplified theory, experts do what they do because they are good at it. According to the cynical theory, experts only serve the powerful. The problem is that a larger theory built out of these two components is always applied inconsistently. When experts say what we believe, we use the simplified theory to assess them. When they say things we don't like, we immediately switch to the cynical one. This is done not just by the general public but, I would say, even by STS scholars, and I think it works to everyone's detriment.

I would much prefer what you call the conventional view: that more studies--even with all their attendant problems--are better than none.

Not that I have a solution! Though have you read Daniel Sarewitz's "How Science Makes Environmental Controversies Worse"? It is one of my favorites, and he does have a solution, although I find it hard to translate it into a programmatic form.

Ben Recht:

Thanks for this very detailed comment. You make great points (and I've added Sarewitz to my reading list).

I want to emphasize that I'm not against expertise. My argument in this post and in general is against a particular breed of data science that has zombified quantitative social science. Pulling such quant social science out of policy might not reduce conflict, but I don't agree with your argument that it would increase conflict.

The problem with these Q.S.S. studies is that they flood the zone. 1. They are ALL WRONG. 2. No one bothers to check and just refers to the title of the paper. 3. None of the policymakers or judges has the expertise required to understand this, but the evidence is added to the policy briefs or court decisions anyway. This is a disaster.

An example that sticks in my mind is discrimination law, where you get cases like SFFA vs. Harvard, with economists flooding the zone with insane regressions that add hundreds of pages to Supreme Court briefs. The stats make the arguments more heated and more annoying! And we forget that the crux of the issue is that we should nationalize Harvard.

Shreeharsh Kelkar:

Thanks for the reply and for engaging. I agree that it's not clear that arguing based on vibes would increase conflict though I certainly can't see it *decreasing* the intensity of the conflict.

If I may, though, I would characterize your position here as "I am for expertise, except for this particular group of experts that I (do not like / think are wrong / think have bad methods / think are working for people I do not like)." I think most people (laypeople and experts alike) hold this belief today. Obviously the experts in question will differ: John Quiggin above thinks "casting doubt" on the smartphone study is okay because it's a moral panic; I assume he's not so much in favor of your "casting doubt"-type writings on the Bangladesh mask study. But when everyone thinks this way and things get polarized enough that people hate different experts in equal numbers, we get to where we are today: widespread, generalized institutional distrust.

I think it would help to tease apart the local issue (problems of warrant and methodology in QSS) and the big picture (what should the role of expertise in decision-making be?) in your statement. Your point about being against a certain kind of quantitative social science rings true to me (even though I am neither a statistician nor an economist; at most, I am some kind of informed consumer who likes to read Andrew Gelman's blog, and even there I get lost when people really start arguing). For you as a participant, it makes sense to push against what you think of as bad methodology that nonetheless gets followed.

But I think when we fight our local battles, we also want to be careful about our broader messaging, because we do want to build a world where studies matter in our decision-making, and these local fights can spill over into the larger problem of trust in experts. (And arguably, the larger distrust problem is the result of these smaller battles spilling out, aided by a fractured media landscape.) I have some thoughts on this, but this is not the place to go into them.

Re: SFFA vs. Harvard, I don't know anything about the economists' studies, but even if we took those out, it's not clear to me the issue would get any less contentious. I'm not sure what nationalizing Harvard means (but it sure sounds contentious!), but for me the issue has always been that Harvard doesn't matter. Selective schools educate very few students, and the bulk of higher-education students go to non-selective public colleges (a large chunk of them community colleges)--all of which I am sure you know (and sorry if this sounds pedantic).

And the selective public schools (which dwarf the selective privates) could easily solve this problem if they gave preferential treatment to lower-income students. This would both solve the problem of recruiting from underrepresented minorities and completely avoid the constitutional issue of discrimination. But they don't. And this Politico article has convinced me (https://www.politico.com/news/magazine/2023/09/15/supreme-court-admissions-elite-schools-00116087) that the reason has to do with these institutions wanting to keep their criteria for admitting students as much in their own control as possible. And why not? These institutions (our own among them) see themselves as cultivating the next elite, and of course that requires them to keep their admissions criteria ambiguous. Clear, transparent criteria that you commit yourself to can get in your way. So I guess we're now left endlessly arguing over SFFA vs. Harvard, and the economists pitch in with their regression analyses! But even without them, I doubt we'd reach an agreement.

Again, thanks for engaging. I look forward to your post on the economists' briefs in SFFA versus Harvard!

Ben Recht:

I mean, do studies in astrology matter? Does expertise in astrology matter? Calling out fields built on sand matters. I'm strongly opposed to the notion of deference to expertise for the sake of expertise. And yes, it is a political decision about where we draw the line.

And all I mean by nationalize Harvard is that the prestige associated with Harvard only exists because of the accepted delusion that a Harvard degree is worth more than a UMass degree. That we collectively allow Harvard and its 50 billion dollar endowment to stamp "prestige" on degrees is a collective choice. We could force Harvard to multiply its class size by ten, or we could tax the endowment. Either is fine by me and far preferable to arguing about which 2000 lucky duckies get their magic slip of Harvard stationery.

Shreeharsh Kelkar:

Of course, studies in astrology do not matter. That's because there's no sizable constituency of astrologers (or their supporters) out there saying: Hey, astrologers should be consulted before we institute mask mandates or before we decide to set up this nuclear power plant. But we do have epidemiologists and statisticians and public health people and virologists all debating the effectiveness of NPIs and vociferously disagreeing about it. (If and when a social movement for taking astrology seriously takes hold--it will probably take decades--then, yes, it will matter.)

I think we agree on a lot more than we disagree. I am not suggesting that we should defer to expertise for the sake of deferring to it. I am saying that we need to be able to talk about how one might disagree with experts one thinks are wrong without also undercutting expertise *in general*. One of the things we know is that the sight of highly credentialed experts cutting each other down to size on a high-stakes issue leads laypeople to distrust experts in general (I mean, here they are with PhDs and everything, and they can't agree on anything?). When the disagreement took place backstage, that was okay, but that's not a luxury scientists--and especially social scientists--have anymore in this fractured media landscape.

The point is not to stop disagreeing but to think about the spaces in which one has these disagreements and to construct boundaries differently. In her book Designs on Nature, Sheila Jasanoff has an interesting comparison of the American, British, and German "styles" of regulating biotechnology. The American polity foregrounds conflict: truth is supposed to come out of adversarial experts viciously debating each other in a public setting. The German polity emphasizes representativeness: all parties with a stake in an issue are put on a committee, but the committee's deliberations happen in private, behind closed doors. The former is "open," the latter less so; but in the first, people see experts debating each other, and even if one expert wins on a given issue, the spectacle undercuts expertise in general, especially as more and more issues become high-stakes. The latter is not perfect either (I don't want it to sound as if Europe has all the answers). The point is that objectivity is constructed, but it can be constructed differently, and the lines between politics and science can be drawn in different ways.

Anyway, we seem to have drifted quite a bit. I only wrote this comment because of your line "What if we start from the assumption that everything in quantitative social science is wrong? What if we just ignored these papers and used our eyes?" My point was just that (a) using our eyes won't make our conflicts any less intense, and (b) taking the stance "all QSS is wrong" is probably not the right line to draw if we do think expertise matters in some way.

I'm going to stop here! Thanks for engaging.

Kelian Dascher-Cousineau:

"1. They are ALL WRONG. 2. No one bothers to check and just refers to the title of the paper. 3. None of the policymakers or judges has the expertise required to understand this, but the evidence is added to the policy briefs or court decisions anyway." I really struggle to believe that any of these 3 points are true. Is this really the position you hold? Is this, in your opinion further from the truth? - 1) many of the papers have varying degrees of errors ranging from typos to fraud. 2) People delve into the literature to varying degrees, most indeed don't reproduce the code and 3) some policy maker are more thorough than others in reading the scientific material and likely rely on experts who have read the material to parse the literature (and rely on (2) to help with (1)).

Ben Recht:

With regards to quantitative social science and epidemiology, it is my position. The fields are built upon methodological fallacy. And I am against granting them political power.

Kevin Munger:

Thanks for staking this out so clearly.

Reframings I'm working on:

What *does* social media cause?

What causes *me*?

Ben Recht:

I do love that question. "What is the cause of me?"

John Quiggin:

At least in this instance "the studies" aren't so much claiming to prove that smartphones are harmless as challenging the certainties of the person quoted at the beginning of the article. Given the regularity of moral panics about what the kidz are doing, that's a good thing.

Ben Recht:

That's an interesting libertarian twist: we should encourage more studies because it muddies the water and prevents action.

Andy Berner:

I've been hugely enjoying this series! The introduction to Meehl has been a breath of fresh air.

When we think about the interaction between science and policy, how do we (as a society) adjust for the different degrees of "knowability" between disciplines? E.g., the chemistry behind ozone depletion can be shown pretty convincingly with a combination of lab-bench demonstrations of the reaction and direct observations of the atmosphere, and this had a clear policy implication for the use of CFCs. By comparison, projecting which NPIs would be most effective during a pandemic is a hugely difficult task to begin with, and, as you note, the social incentives of Science these days discourage humility and encourage a bunch of folks without the chops to do meaningful analysis to seek notoriety via Science by Press Release.

Disciplines that *can* be more rigorous _aren't always_, of course. The late 2000s/early 2010s in climate science had a lot of dumb wagon-circling around some questionable research, got tribal with researchers who wouldn't fully toe the "party line," etc., all because folks who should have been trying to do good science started thinking they could/should shape policy. Seems like the first step is trying to get back to a cultural norm/incentive structure that rewards research quality first; then we separately need to sort out how governments produce policy in reference to that work (or not, in the case where everyone can see the body of available work isn't actually meaningful).

Ben Recht:

Your comment gets at a pretty critical question: where do we draw the line with regard to expertise? Especially in policy. For example, the FDA exists to make sure that toxic drugs aren't released onto the market. This requires regulatory mechanisms to evaluate the safety of pharmaceuticals, which in turn requires biomedical expertise. But where do we draw the line on the mandate, size, and scope of the FDA? That's not a scientific question.

And a lot of the arguments I'm making are against contemporary economics. People will insist that you need to do some sciency stuff to set tax rates. But this has become a self-fulfilling prophecy. When a tax code is rigid and only updated every decade or so, a "science-based" one isn't actually better than one hashed out by vibes. I'm going to try to expand on this point in a future post.

Misha Belkin:

Anyone with working eyes can see that the Sun orbits the Earth.

Misha Belkin:

Just to be clear, I agree with some of your points. But "seeing with eyes" is not an alternative to flawed science.

Alexei Kapterev:

The argument assumes that when we "use our eyes," we don't use studies. But I don't think that's true. We see the world through the lens of the OLD studies or "studies" we were taught in school or elsewhere. Our conclusions are always theory-laden. When people talk about psychology, they can "use their eyes" and draw conclusions based on

1) folk psychology

2) Freud

3) Humanistic Psychology

4) New Age

5) Contemporary academic psychology

It's not that any of these are value-free. Folk psychology is great (as they say, it's the only part of psychology that replicates well), but very few people actually use it. Most use something else. And compared to 2), 3), and 4), contemporary social science doesn't look so bad. So what's the reason for us to stop updating our beliefs now?

Ludwig Yeetgenstein:

Just wondering, what are the answers staring us in the face when it comes to cell phones?

Ben Recht:

Answers might not be the right word, but it is indisputable that phones have dramatically reshaped how we interact with the world.

That said, it is wild to me that whether we should allow cell phones in K-12 classrooms is "controversial."
