This post, um, tangentially digs into Lecture 8 of Paul Meehl’s course “Philosophical Psychology.” But it ties the material into current events. You can watch the video of the lecture here. Here’s the full table of contents of my blogging through the class.
Yesterday on Twitter Dot Com, Tyler Harper kicked a hornet’s nest:
“The debates about what the research does or doesn’t show about the dangers of smartphone use are a distraction. WE CAN ALL TELL THE PHONES ARE BAD! I don’t mean to do a science denial but I just don’t care what the studies say. Anyone with working eyes knows there is a problem.”
Aha! The studies. Well, what studies are those exactly? It’s Twitter so, of course, none of the people yelling at Tyler provided any links. Now, I don’t want to do a literature survey on phones and depression either,1 because I, and all of you who have been following along with this blog series, know the literature is an uninterpretable mess.
The conventional wisdom—espoused here by Osita Nwanevu—is that it is always better to look at studies than not. But what if that's not true? What if the studies are just wrong? If social psychologists use tools proven insufficient to measure anything, why should we care what they say?
The fallacy expressed by Nwanevu is that making decisions based on vibes is bad, while science is somehow less value-laden or vibes-based. This couldn't be farther from the truth! We all know that science is a social effort. Biases persistently manifest themselves in the scientific literature. Recent work by Winsberg and Harvard describes how sneakily value judgments make their way into models. Add some null hypothesis significance testing and some crud, and the science will show you whatever you want to see. Then add to the mix that scientists love to be quoted “in the press,” and we end up incentivizing research as meme generation.
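To make the crud point concrete, here's a minimal simulation sketch (my own toy example, not anything from Meehl or the studies being argued about): give two variables a trivial amount of shared variance, collect a sample the size of a typical observational smartphone dataset, and null hypothesis significance testing will happily declare the association real. The 2% "crud" correlation, the sample size, and the variable names are all made up for illustration.

```python
# Toy illustration of the crud factor + NHST point (all numbers are made up).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 100_000   # a large observational sample
crud = 0.02   # a tiny, substantively meaningless true correlation

# Two variables that barely share anything; call them "screen time" and "mood score".
z = rng.standard_normal(n)
screen_time = np.sqrt(crud) * z + np.sqrt(1 - crud) * rng.standard_normal(n)
mood_score = np.sqrt(crud) * z + np.sqrt(1 - crud) * rng.standard_normal(n)

r, p = stats.pearsonr(screen_time, mood_score)
print(f"r = {r:.3f}, p = {p:.1e}")
# Prints a correlation near 0.02 with an astronomically small p-value:
# "statistically significant" while explaining well under 1% of the variance.
```

Swap in whatever outcome you like and the test will oblige just as readily. That's all "the science will show you whatever you want to see" needs to mean here.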
In Lecture 8, I have been arguing for open data. Part of this is in the hope that we can spend more time understanding the depths of unreliability of the published literature. Open-data requirements don’t prevent the publication of incorrect findings. Do you guys remember science during the pandemic? Oh boy, I have been so reluctant to bring that up again. But for a while, when trapped at home with nothing to do, I was downloading all sorts of papers and all of them were wrong. If you give me any observational study about COVID-19, I’m now able to find a major flaw in 60 seconds. This one got some attention. But that was just one of dozens of deeply flawed papers I looked at. It didn’t matter that the mainline result was based on incorrect data downloaded from a public source.
After an economist recently insulted his expertise, Rex Douglass decided to be a glutton for punishment and really dig into a COVID study. In this beautifully detailed report, he shows how said economist’s paper is based on a complete mishandling of big data sets, averaging things that shouldn’t be averaged, and arbitrary decisions that make the desired conclusion look better.
As Rex has noted, this process of debunking isn’t sustainable. Rex, with no gain for himself, spent hours digging through the paper’s code, wrote a long blog post, and yet the original paper remains published. This is the norm. Most papers don’t get retracted. All of the authors still get gold stars on their CVs. The press will write up studies as if they are true, and politicians will cite them in their policy briefs. If you follow the rules and rituals, you can do science solely for political talking point generation. That’s deeply dangerous.
I don’t want to try to argue that debunking should be made more sustainable. I have a more radical proposal. What if we start from the assumption that everything in quantitative social science is wrong? What if we just ignored these papers and used our eyes? I’m not saying that’s perfect. I’m not saying that this will avoid moral panics. I’m just saying that The Science is making the situation worse.
Nicholas Wilkey challenged me about what to do instead. “There are endless critiques, but precious little positive solutions. I say this as someone who works in policy, rather than being an academic.” This position is common, even among academics. I’ve been arguing (collegially) with my friend Avi Feller about this for years now. I have some thoughts about this based, oddly enough, on the writing of Karl Popper. I’ll share them in a future post.
But for today, let me quote David Graeber (making his second appearance in this blog series), who, as usual, sums up my position better than I can. In Bullshit Jobs, he writes:
“Another reason I hesitate to make policy suggestions is that I am suspicious of the very idea of policy. Policy implies the existence of an elite group—government officials, typically—that gets to decide on something (‘a policy’) that they then arrange to be imposed on everybody else.”
The problem is that scientific elites don’t know better. Our capabilities to know and understand human behavior using the scientific method have been shown to be so deeply fallible that I don’t even know where to begin. And yet policymakers cite these papers in their briefs, experts get quoted in court documents, and we race to the bottom to find “evidence” that justifies whatever position we want. This leads to, as Tyler says, gaslighting. Anyone with working eyes sees one thing, but you can find any random scientist to cook a study to claim “the evidence” points the other way. Again, we shouldn’t be pointing to an uninterpretable mess as evidence for anything. If those techniques are failing us, why should we outsource decisions to The Science? Especially when the answers are staring us in the face.
I poked around, and most of the commentary was hating on Jonathan Haidt. I get it. Haidt is a preachy tut-tutter. But the counter-evidence is, as expected, terrible. In this Nature editorial, for example, the author cites not only a bunch of random meta-analyses but also a bizarro study using fMRI to spot brain changes from screen time. When you are leaning on fMRI to make your case, you have lost the argument.
I agree with you on the specifics--whether smartphones and social media are responsible for depression or not, they are clearly doing something, and not allowing them in classrooms shouldn't be controversial; the case for NPIs was made way too strongly (it was your blog posts on the Bangladesh mask study that made me realize how flimsy the effects were)--but I am not sure I buy the "What if we start from the assumption that everything in quantitative social science is wrong? What if we just ignored these papers and used our eyes?" alternative (and its ostensible reason: "The Science is making the situation worse."). (You're probably exaggerating for effect here, but still.)
For one thing, if we stopped the studies and used our eyes, we'd still be arguing about those things and we'd be arguing about them even more angrily than we do now. The sociologist Gil Eyal has a great way of putting this in his book The Crisis of Expertise. He says that what happened over the second half of the 20th century is that we started to use science more and more to solve political questions (should we build this dam? what NPI is the best for infectious diseases?) and in the process, as our political questions became fractious, we ended up politicizing science too. And it's not clear that descientizing politics will lead to any less conflict.
Woodhouse and Nieusma have a nice article called "When expert advice works, and when it does not," and they argue that everyone's theories of experts and expertise have two component theories: a simplified one and a cynical one. According to the simplified theory, experts do what they do because they are good at it. According to the cynical theory, experts only serve the powerful. The problem is that a larger theory built out of these two components is always applied in an inconsistent way. When experts say what we believe, we use the simplified theory to assess them. When they say things we don't like, we immediately switch to the cynical one. This kind of thing is done not just by the general public but, I would say, even by STS scholars, and I think it works to everyone's detriment.
I would much prefer what you call the conventional view that more studies--even with all their attendant problems--are better than not.
Not that I have a solution! Though have you read Daniel Sarewitz's "How science makes environmental controversies worse"? It is one of my favorites, and he does have a solution, although I find it hard to translate it into a programmatic form.
Thanks for staking this out so clearly.
Reframings I'm working on:
What *does* social media cause?
What causes *me*?