Meehl’s Philosophical Psychology

Introduction: Blogging Philosophical Psychology

Lecture 1 [YouTube]:

  1. Everything Inherently Meta - A historical overview, starting with logical positivism.

Lecture 2 [YouTube]:

  1. Popperian Falsification - Popper’s program for the logic of science.

  2. Inconvenient Facts - Why it might be rational not to abandon theories in light of falsifying evidence.

  3. Risky Predictions - The role of prediction in corroborating theories and quantifying what makes a prediction surprising.

Lecture 3 [YouTube]:

  1. The ouroboros of discovery and justification - Why it’s necessary to account for psychological and social factors when assessing scientific evidence.

  2. See What We Want to See - The context of discovery in experimental outcomes. Why some labs consistently find things others don’t.

  3. Asystematic Reviews - The context of discovery in the scientific literature. Why we can only glean a partial view of the scientific landscape from papers.

  4. Overhead Projections - The absurdity of indirect costs and neoliberal university rent-seeking.

  5. Scholar or Publicist - How the pressure to fundraise impacts academic publishing and turns scientists into marketers.

Lecture 4 [YouTube]:

  1. An Iteration Between Theory and Practice - Theories are never true, just reasonably true. They can be patched in the light of falsifying evidence. The example of the kinetic theory of gases.

  2. Boundary Conditions - Patching theories by predicting new observations. Understanding that there is only so much you can include about the universe before you run out of compute. Long derivation chains with too many free parameters run out of use value.

Lecture 5 [YouTube]:

  1. Lakatosian Defense - The scientific method in all of its arational glory.

  2. Trench Warfare - Using all of the examples we’ve discussed so far to flesh out the mechanics of Lakatosian Defense.

  3. Les Atomes - Perrin’s arguments for the existence of atoms. How explaining the same thing thirteen different ways counts as a “damned strange coincidence.”

Interlude on software and science

  1. Scientific Versus Statistical Prediction - The role of software, prediction, and “AI” in contemporary science. And expanding upon why Meehl thinks science leans so heavily on prediction.

  2. Software as a Disservice - All scientific validity now rests on software being correct. It infects every stage of a derivation tree, and inflates the complexity of Lakatosian Defense. Might software acceleration paradoxically decelerate progress?

Lecture 6 [YouTube]:

  1. Obfuscating Factors - Diving into Meehl’s “Why Summaries of Research on Psychological Theories are Often Uninterpretable.” Going through the first obfuscating factors that arise from sloppy derivation chains.

  2. Some of my best friends are statisticians - Meehl’s thoughts on statistical rigidity and losing the plot. Statistical tests might be useful to overcome skepticism, but they lose their value if they become ritualized.

  3. Brain Games - The delicate balance between rules and play in research and two ideas for tilting us in a more playful direction.

  4. Underpowered - The bizarre rituals of power calculations. A review of how they work in practice so we can understand how they play out in social scientific studies.

  5. Power Posing - Some of my confusion about why insufficient power is an obfuscator. If most studies were underpowered, wouldn't we see fewer papers?

  6. Crud - In the biological and social sciences, everything is correlated with everything. The question is, how much? I define Meehl’s crud factor and give some evidence that it is large enough to worry about.

  7. Correlation Is All We Have - I show that null hypothesis tests can only measure correlations. Specifically, the z-score is just the correlation between treatment and outcome times the square root of the sample size. This is a huge problem amid ambient crud.
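A quick numerical sketch of the claim above: if the z-score is just the sample correlation times the square root of the sample size, then under a true null of zero correlation the quantity r·√n should behave like a standard normal variable. The simulation below (my own illustration, not Meehl's) checks this.

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 500, 2000

stats = []
for _ in range(trials):
    # x and y are independent, so the true correlation is zero
    x = rng.standard_normal(n)
    y = rng.standard_normal(n)
    r = np.corrcoef(x, y)[0, 1]
    stats.append(r * np.sqrt(n))

stats = np.asarray(stats)
# Under the null, r * sqrt(n) is approximately standard normal:
print(f"mean ≈ {stats.mean():.3f}, std ≈ {stats.std():.3f}")
```

The flip side, and the worry amid ambient crud: if the true correlation is any nonzero crud-sized value, r·√n drifts away from zero at rate √n, so a large enough sample rejects the null no matter how trivial the correlation.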

Lecture 7 [YouTube]:

  1. The Technical Depths of Crud - Following Meehl’s lead, I did some technical calculations of cruddy correlations. There is no shortage of cute mathematical puzzles. These calculations highlight how we can’t escape crud factors. This post is mostly for bookkeeping and stage setting.

  2. Imperfect Assessments - Meehl’s final two obfuscators: pilot studies and detached validation claims. Both of these highlight the impact of imperfect measurement on published research findings.

Interlude on crud

  1. Crud Hypothesis Testing - How would statistical testing look different if we took crud seriously? I construct a plausible “crud null hypothesis test” and look at how we’d reject this. It requires estimating ambient background correlations and then precisely estimating correlations between variables we’re interested in.

  2. A Credible Junta - Can current causal inference methods mitigate latent crud? No. They are also just based on correlations. But they establish an artifice of credibility through a complex esoteric craft of storytelling.

Lecture 8 [YouTube]:

  1. Towards Interpretable Literature - Meehl’s suggestions to improve the scientific literature may sound depressingly familiar: Improving reproducibility, moving beyond hypothesis testing, and publishing less. In this first post, I talk about reproducibility and am actually optimistic. Even though it’s a slow and painful process, we’re making progress.

  2. Replication Versus Reproduction - Replication and reproduction are disparate concepts. We should be pedantic about the distinction.

  3. Reproducing The Blue Screen of Death - Perfect bit-for-bit reproducibility is impossible, but this doesn’t mean reproducibility is hard. Or in crisis. So while in the last post I argued for pedantry, in this post I say if you know, you know. Because it’s Tuesday on the internet.

  4. Growing Evidence - To inspire the rejection of significance testing in the human-facing sciences, I look at impressive discoveries in the human-facing sciences. Focusing on studies of growth, I find examples of experiments where the significance test not only plays no role, but actually distracts from the topline result.

  5. I don't care what the studies say - If all studies are wrong or uninterpretable, why on earth should we use them to make policy decisions?

  6. You're gonna run when you find out who I am - Though lots of proposals have been floated to damp rampant overpublication, the problem has only grown exponentially since Meehl. I wonder if it’s a sociopsychological problem, rather than one of simple “incentives.”

  7. Acting On the Unknowable - Meehl concludes Lecture 8 emphasizing that some questions are unanswerable by science. I tie this to some work in social science about the hard limits of technocracy and a modern spin on Popper’s piecemeal social engineering.

Lecture 9 [YouTube]: (This lecture starts at minute 82 of Lecture 8.)

  1. Degrees of Disbelief - A glimpse into the weird world of mapping beliefs into numbers. That is, the philosophical problem of probability. This has been a favorite topic on the blog, and I don't find myself getting less confused.

  2. The Algorithmic Subjectivist Myth - There is no rigorous way of computing numerical probabilities of one-off future events. Let me explain why by showing there’s no rigorous way of assigning probabilities to the past.

  3. Holy Wars... The Probability Two - Is it a fluke that we use the word probability to describe both relative frequencies and degrees of belief? It turns out the two notions are inescapably linked.

Lecture 10 [YouTube]: Clinical versus Statistical Prediction (This lecture starts at minute 74 of Lecture 9.)

  1. Part I - In a 1954 book, Meehl made a radical case for decision-making with machine learning that resonates even more today than it did then. I'm going to spend this week working through his argument.

  2. Part II - When are statistical predictions better than human judgment? It strongly depends on how you frame the question.

  3. Part III - After reviewing a century of evidence finding statistical predictions better than clinical judgment, I propose a simple explanation of this phenomenon.

Lecture 11 [YouTube]:

  1. Inference and The Psychoanalytic Interview - Is psychoanalysis scientific? The answers to this tricky question reveal so much about our insecurities.

An initial conclusion, but by no means my last thoughts on the class: Theories of Metatheories.