This is the final post about Paul Meehl’s course “Philosophical Psychology.” Here’s the full table of contents of my blogging through the class.
My friends, we find ourselves at the end of July and the end of Meehl Blogging. There is one more lecture in the series, but the video is corrupted, so it’s hard for me to figure out what to say. No matter, I think psychoanalysis makes for a fitting end, and I’m calling it there.
I’ve spent the last few days collecting my thoughts and pondering a coda. I don’t have a poetic summary just yet. I thought this blog series would take a few weeks, but it ended up consuming three months. I guess that’s a semester class! Maybe I can get course credit? Kidding aside, one of the great joys in life is taking a new class, learning new things, and completely reshaping how I think about a subject. It’s always good to remind myself that this remains possible no matter how old I get. But I’m going to have to take an incomplete on my term paper, as I can’t yet do this course justice.
Let me give a quick, high-level take on what I currently see as my most valuable takeaways.
Lakatosian Defense. Meehl’s synthesis of 20th-century philosophy of science, what he calls metatheory, is better aligned with my views than that of anyone else I’ve encountered. This “equation” that I’ve posted a dozen times has completely reshaped how I think about scientific evidence and argumentation.
Meehl pulls so many pieces together in this simple formulation: Popper, Duhem-Quine, Reichenbach, Salmon, Kuhn, Feyerabend, Lakatos. From this vantage point, you can tie the logical aspects to the social ones. Deduction to induction. You can reconcile post-Latourian science studies with logical positivism. It’s a beautiful, elegant framing that pulled together so much of what I’ve been thinking about for the past half-decade.
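For anyone just joining and wondering which equation I mean, here it is once more, written roughly the way Meehl does in his papers on Lakatosian defense:

```latex
% Meehl's schema for how a theory meets the data (rough transcription):
(T \wedge A_t \wedge C_p \wedge A_i \wedge C_n) \;\vdash\; (O_1 \supset O_2)
% T   : the substantive theory under test
% A_t : auxiliary theories
% C_p : the ceteris paribus clause ("nothing else is interfering")
% A_i : instrumental auxiliaries (the instruments measure what we claim they do)
% C_n : the particular conditions realized in the experiment
% O_1 \supset O_2 : the observational conditional the whole conjunction entails
```

If you observe O_1 and not O_2, modus tollens falsifies the conjunction on the left, not T by itself. That is the Duhem-Quine point, and it is why the defense is Lakatosian: the scientist has to decide where the blow lands.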
The poverty of null hypothesis testing. Before engaging with these lectures, Meehl was one of my prime go-tos for critiquing the tyranny of frequentist hypothesis testing. But hearing him talk about it helped me understand the broader context of his interest and the nuances of his critique. It’s wild how we torture ourselves with this broken inference framework. The Fisher, Neyman, and Pearson program from the 1930s is just broken. It isn’t fixable. But social convention sticks us with it, guaranteeing we will always be confused. These lectures highlight the absurdity of applying a Lakatosian strategy to understand social relations. It’s the wrong toolset and has led to nothing but widespread confusion and invasive bureaucratic kudzu. Let me not get stuck here again today… perhaps this will be the topic of my term paper.
Prediction is quantitative, inference is qualitative. The last three lectures on probability, prediction, and psychoanalytic inference might have been the most valuable part of the class for me. Finally, for the first time in my career, I feel sure-footed about probability. I’m serious. Once you realize that Carnap’s Probability 1 (the logical, epistemic kind, as opposed to Probability 2, the relative-frequency kind) isn’t numbers, you can be more relaxed about the entire probabilistic endeavor. Statistical prediction is a powerful concept but has limited scope, no matter what the AGI dorks tell you. Probabilistic inference can be qualitative and still helpful for guiding practice. Meehl makes a compelling argument for a dappled world of probability. Quantifying the unknown is an art form and an ethos. It can never be rigorous. And that’s ok!
Why we’re stuck. Meehl’s 1989 complaints and critiques about science and research are depressingly similar to the ones science reformers air on Twitter today. You could make the case that the problems were the same in 1970. Why did we get stuck? This is my favorite question about the history of science, and the answer seems to be non-scientific factors that scientists pretend they are isolated from: the Cold War, the American Empire, hyperfinancialization, computerization. We’ve lived in a time of unprecedented technological advancement and stagnation. Though it was unintentional on his part (he didn’t want the class filmed in the first place), Meehl’s lectures helped me gain a deeper appreciation of what has flourished, what has withered, and what we might want to do next.
More to come. In the meantime, I’d love to hear what you all took away. Let me know which parts you found most interesting, and which parts you most disagreed with.
And with that, I’m going to take a few weeks “off” to finish a few other writing projects. I may pop back in here if there’s something wrong on the internet, but I’m not planning to resume regular blogging until late August.
And that’s because late August is when the semester begins! This fall, I’m excited to blog about my class on Convex Optimization in the age of LLMs. This should make for an interesting blog project where I’ll try to digest a very mathematical and technical topic with as few equations as possible. It should be a fun mathematical communication experiment. I hope you’ll enjoy reading along.
I think the single thing that most caught my attention and gave me an "aha" moment was the comparison of p-value testing to the chart where they switched what they were feeding the mice. I do think people get stuck in that mentality of framing every study as a hypothesis test, and seeing a concrete alternative was really helpful.
Meehl's equation is certainly something beautiful to behold. As you write so aptly, it is a very elegant framework in which one can reconcile social studies of science with logical positivism.
I also found the points of view offered on the meaning of "prediction" and "inference" very interesting. I think it is useful to recall the notion of "intuitive mathematical theories" in contrast with "axiomatic theories," which include abstract algebras. The latter are useful for computation but, importantly, truth is not a thing in them; that is something that can only be assessed in a particular interpretation grounded in the facts of an intuitive theory. In other words, I can compute the state x_n that follows from an initial condition x_0, a mathematical model f(x,u) of a (controllable) physical process, and a control sequence U = u_0, u_1, ..., u_{n-1}. But the fact that I can do the computation has little to no bearing on whether the statement "x_n is reachable from x_0 via U" is true. That is something that can only be assessed by the intuitive theory one uses to formalize the interpretation of the x's, u's, and f's.
Axiomatic theories are good for computation, but by design they cannot provide insight into the ultimate truth of the statements we can prove to be theorems in those theories. The actual insight comes when one cannot prove a statement to be a theorem while knowing its truth under certain conditions in the intuitive theory.
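To make the "computation" half of that concrete, here is a minimal sketch, with a made-up linear f standing in for the mathematical model; the names (f, rollout, x0, U) are purely illustrative:

```python
# A minimal sketch of the point above, using a made-up linear dynamics f
# (the real f would come from whatever physical process you are modeling).
import numpy as np

def f(x, u):
    # Hypothetical linear dynamics, chosen purely for illustration.
    A = np.array([[1.0, 0.1], [0.0, 1.0]])
    B = np.array([[0.0], [0.1]])
    return A @ x + B @ u

def rollout(x0, U):
    """Compute x_n by iterating x_{k+1} = f(x_k, u_k) over the control sequence U."""
    x = x0
    for u in U:
        x = f(x, u)
    return x

x0 = np.array([[0.0], [0.0]])
U = [np.array([[1.0]]) for _ in range(10)]
x_n = rollout(x0, U)
# The computation always terminates and returns a number, but whether
# "x_n is reachable from x0 via U" is true of the physical process is a
# separate question, settled only by the interpretation of x, u, and f.
print(x_n)
```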
I was also deeply struck by the analogy between current and 35-year-old critiques of science and research. I still think there are ways forward, and I agree with you that they have to do with "degrowing" scientific outputs. I am under the impression that many institutions and colleagues around the world would agree that the value of having a publication in a given venue has steadily decreased over time in many fields. We should foster meaningful communication between researchers (face-to-face, whether in person or digital), and communal, transparent mechanisms to give high-quality, detailed, actionable feedback to fellow researchers. For instance, the recent launch of alphaxiv.org filled me with joy. I am not sure it is the solution, but I reckon it is a bold step in the right direction.