This post digs into Lecture 5 of Paul Meehl’s course “Philosophical Psychology.” You can watch the video here. Here’s the full table of contents of my blogging through the class.
Since I spread the argument out over the last post, let me tidily summarize Meehl’s metatheory of Lakatosian Defense. An experimental outcome is deduced from a collection of scientific assertions in a derivation chain. The derivation chain uses clauses from the core theory (TC), assorted auxiliary theories from the discipline (AT), auxiliary theories about the instruments (AI), a ceteris paribus clause (CP), and a variety of experimental conditions (CN). From the logical conjunction of all of these clauses, we logically deduce the prediction “If I observe O1, then I will observe O2” (O1⊃O2). We choose experiments such that, with the background information we have, the probability of O2 conditioned on O1 is small. Together, we get the expression:
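The formula itself didn't survive into this text, so here is a reconstruction of Meehl's schema from the clauses just listed (notation is mine, not a verbatim copy of his slide):

```latex
(T_C \wedge A_T \wedge A_I \wedge C_P \wedge C_N) \vdash (O_1 \supset O_2),
\qquad p_E = \Pr(O_2 \mid O_1) \ \text{small on background knowledge alone}
```

That is, the conjunction of core theory, auxiliaries, ceteris paribus, and conditions entails the conditional prediction, and pE is the probability we'd assign to O2 given O1 without the theory's help.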
If we observe O1 and O2 in our experiment, the low probability statement corroborates our theory. The smaller pE, the more the theory is corroborated. If we observe O1 but not O2, our derivation chain is falsified and we look for something to blame, moving from right to left.
Today, I want to cast the examples I’ve listed so far in this series as stages of Lakatosian defense to illustrate why this framework elucidates the role of falsifiers in scientific development. In the next post, I’ll discuss corroboration.
Experimental conditions
Meehl’s description of the controversies in latent learning shows how arguments about experimental conditions alone can keep scientists busy for years. He described squabbles about the right way to handle rats before setting them loose in a maze. It mattered how you carried the rat from the cage to the maze. If you held it by the tail, it would be less likely to cooperate than if you let it sit gently on your forearm while you petted it. If you gave the rat a little food in advance, it might perform better than if it went in hungry. Every tiny adjustment mattered. Whichever side of the latent learning debate you were on, you could criticize the other side by arguing about CN.
Wars about experimental conditions, as Meehl says, are usually focused on replication. It is replication in the narrowest sense here:
“You are not denying the facts. You’re denying that something is a fact.”
You attack CN by asking if someone can create the same conditions that a paper reports on and see the same outcome. A more general sort of replication, seeing if a result translates to slightly different contexts, takes us one step up the Lakatosian Defense Hierarchy.
Ceteris Paribus
Ceteris paribus clauses are so general that fields can spend decades fighting about them. Here, we ask if something outside of the derivation chain is responsible for the experimental outcome. Ceteris paribus clauses assert that we assume the derivation chain is true “everything else being equal.” But of course, such a bold statement is never literally true. We can’t really control everything in an experiment. The question is always whether we have controlled things enough.
Most of the arguments about “DAGs” in causal inference are attacks on CP. If an unspecified confounding variable causes both O1 and O2, then the experimental association is spurious. The confounder violates CP. Arguments about selection biases (like Berkson’s paradox) are also attacking ceteris paribus, as they argue the association only holds on the very specific population collected for the particular experiment.
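As a toy illustration of that first kind of CP attack, here's a minimal simulation (my own sketch, not from Meehl or the post) in which a hypothetical confounder Z causes both O1 and O2, O1 has no effect on O2, and yet the two are strongly associated until you control for Z:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hypothetical confounder Z drives both observables; O1 never causes O2.
z = rng.normal(size=n)
o1 = z + rng.normal(size=n)
o2 = z + rng.normal(size=n)

# Marginally, O1 and O2 look strongly associated...
marginal_corr = np.corrcoef(o1, o2)[0, 1]

# ...but the association vanishes once we regress Z out of both.
o1_resid = o1 - z * np.dot(z, o1) / np.dot(z, z)
o2_resid = o2 - z * np.dot(z, o2) / np.dot(z, z)
adjusted_corr = np.corrcoef(o1_resid, o2_resid)[0, 1]

print(f"marginal corr:  {marginal_corr:.2f}")
print(f"adjusted corr:  {adjusted_corr:.2f}")
```

The marginal correlation comes out near 0.5 while the adjusted one sits near zero: exactly the situation where a critic says the observed O1–O2 association is spurious and CP is violated.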
But the ceteris paribus clause is even more general than this. CP is where we store all of our idealizations. You assert that certain things you know to be true are irrelevant to the outcome under the prescribed experimental conditions. Since you know these idealizations might be false, you can use CP to your advantage in a Lakatosian defense. You can try to explain the outcome away by adding an additional auxiliary theory up the derivation chain to fix the CP violation. Perhaps there was a matter of fact about the universe you weren’t aware of in your initial experiment, but the observation is explained once you add the fact to your derivation chain. This was the case with the discovery of Neptune. Or, perhaps there were idealizations about facts you knew were present but thought had low influence. Incorporating these facts might fix up the experimental correspondence. This was what happened in the kinetic theory of gases with van der Waals corrections. In both of these, we’re modifying the “these facts don’t matter” clauses, CP, to transform a falsifying experiment into a corroborating experiment. But to do so, we had to bloat up the auxiliary theory clause with more facts and parameters.
Instrumental auxiliaries
Meehl emphasizes that in his characterization, instrumental auxiliaries are only those outside of the discipline. But where you draw disciplinary boundaries can be tricky. In the example of Eddington fudging his analysis of light deflection, are telescopes inside or outside the theory of astrophysics? I might argue that the telescopes and photographic plates are governed by terrestrial physical concerns, not grand theories of cosmology.
What about Dayton Miller’s observations of aether drift? While many people questioned the functionality of the Michelson-Morley interferometer, Miller’s apparatus was harder to attack because Miller was such a careful experimentalist. In the end, the results were explained away as thermal artifacts. Was this a violation of ceteris paribus or a problem with the instrument? I suppose we could say it’s both.
I bring this up because I want to talk about software and statistics, which messily infect all of the clauses in the Lakatosian Defense. I’ll say more about this in a future post.
Theoretical auxiliaries
The final stand of a Lakatosian Defense attacks the theoretical auxiliaries of a core theory. As I mentioned, adding auxiliary theories is a common part of Lakatosian defense. We let ourselves explain facts by adding conditional characterizations of when certain approximations are valid. But removing auxiliary theories—declaring them false—is much rarer. In fact, I’m hard-pressed to find good examples, though I’m probably just not thinking hard enough as I write. For what it’s worth, Meehl doesn’t give any clean examples of experiments messing with theoretical auxiliaries directly. If you have any fun examples of attacking auxiliary theories, tell me in the comments!
The reason it’s hard to come by attacks on auxiliaries is that removing them messes up all of your past results. Removing an auxiliary will not only invalidate a bunch of past derivations, but it will also cause your theory to disagree with past experiments. You’ll have to explain that away, too. Meehl argues that scientists will trade some contradictions with the past for a bunch of new damned strange coincidences in the future, but he doesn’t give any examples. I’ll try to pin this all down in Lecture 6 when discussing Lakatos’ notion of the “Hard Core” and “Protective Belt” of a theory. But before we get there, I want to get into the second part of Lakatosian Defense, describing what happens when experiments corroborate your theory.