This post digs into Lecture 4 of Paul Meehl’s course “Philosophical Psychology.” You can watch the video here. Here’s the full table of contents of my blogging through the class.
I’ll use the term boundary conditions to describe the various particulars of the world that go into some predictive calculation. These are our assumptions about entities, like their mass, speed, or even existence, that we plug into our theory when we make predictions. For the ideal gas in a chamber from the previous post, we needed to measure temperature, volume, and the amount of gas to predict the pressure on a piston. But why only these quantities? Why did we not need to consider the particularities of the piston we used to measure the pressure? Did we need to consider the composition of the walls of the chamber?
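Just to make this concrete, here’s a minimal sketch of that prediction, assuming the ideal gas law PV = nRT from the previous post; the specific numbers are made up for illustration:

```python
# Ideal gas law: P V = n R T, so the predicted pressure is P = n R T / V.
# The boundary conditions we plug in are the amount of gas, its temperature,
# and the chamber volume; nothing about the piston or the walls appears.
R = 8.314  # gas constant, J / (mol K)

def ideal_gas_pressure(n_moles, temperature_K, volume_m3):
    """Predict the pressure (Pa) on the piston from n, T, and V."""
    return n_moles * R * temperature_K / volume_m3

# e.g., one mole of gas at room temperature in a 10-liter chamber
print(ideal_gas_pressure(1.0, 293.15, 0.010))  # roughly 2.4e5 Pa
```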
Boundary conditions are always idealizations because they omit things. Such omissions are necessary because we have finite computation speed and memory. Moreover, we begin any prediction with only the limited evidence collected and tabulated by our predecessors. Now, when outcomes don’t agree with our predictions, seemingly falsifying the theory, could we blame the failed prediction on omitted evidence? That is, can we blame it on the boundary conditions?
A favorite story of philosophers of science (in particular of Imre Lakatos) is the discovery of Neptune. French astronomer Alexis Bouvard published observations of the motion of Uranus that seemed to violate Newton’s Laws. This meant that either Newton’s Laws changed the farther you moved from the Sun or there was an unaccounted-for massive body in the solar system. Positing the latter, Urbain Le Verrier predicted the location of the missing planet and sent his prediction to the Berlin Observatory in 1846. Lo and behold, there was Neptune, within a degree of where Le Verrier said it would be.
This is again a remarkable corroboration of the theory, achieved by predicting that an idealization was incorrect. Note that there were two main choices here: adjusting the theory to account for distance (like what van der Waals did for the gas law) or keeping the theory as is and predicting that the assumed facts were wrong. But in either case, Newton’s Laws were never abandoned because physicists were so enamored with the elegance of their theory.
This Neptune example may have had too much influence on astrophysics. Now, when the laws of gravity seem screwy in our telescopes, we just imagine there’s something else out there we haven’t seen. Since the 1960s, we’ve observed countless aberrations in our measurements that imply either our telescopes are broken, general relativity is wrong, or there is a vast amount of matter out there that we can’t see. Since we can’t see it, we might as well call it dark matter. The current consensus is that 95% of the energy content of the universe is made up of “dark stuff” (see also CERN). I dunno, that seems like way too many unobserved Neptunes to me. But I guess adding parameters to keep the old model on the books is easier than starting over from zero.
The hard core
All of the nice examples come from physics, but there seems to be a generalizable story emerging here. Every scientific field starts with some simple core ideas (what Lakatos calls “the hard core”). Meehl argues that this core will be present in most derivation chains associated with a given theory. For the initial breakthroughs in the field, that hard core alone will neatly and quickly predict the outcome of several experiments.
But then we’ll start to pile up experiments that don’t agree with the theory. Rather than abandoning the hard core, we add some other theories to patch things. As any good machine learning scientist knows, if you add enough parameters, you can explain almost anything. In the kinetic theory of gases, we added substance-specific parameters to handle low volumes and temperatures. We give every gas a couple of degrees of freedom and are still able to manipulate pneumatic systems as long as we account for the extra parameters. In the Neptune example, we predicted an unseen planet. We can predict a lot of unseen stuff in the universe and get a cosmology of dark matter.
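As a concrete example of such a patch, here’s a minimal sketch of the van der Waals correction, which bolts two substance-specific parameters, a and b, onto the ideal gas law. The constants below are roughly the textbook values for CO2, quoted only for illustration:

```python
# Van der Waals equation of state: (P + a n^2 / V^2)(V - n b) = n R T.
# Solving for P: the hard core survives, at the price of two parameters per gas.
R = 8.314  # gas constant, J / (mol K)

def van_der_waals_pressure(n, T, V, a, b):
    """Pressure (Pa) with the substance-specific corrections a and b."""
    return n * R * T / (V - n * b) - a * n**2 / V**2

a_co2, b_co2 = 0.364, 4.27e-5  # roughly the tabulated CO2 values, in SI units

# At small volumes the correction matters (~2.2e6 Pa vs. the ideal ~2.4e6 Pa);
# at large volumes the two predictions agree.
print(van_der_waals_pressure(1.0, 293.15, 0.001, a_co2, b_co2))
print(1.0 * R * 293.15 / 0.001)  # ideal gas prediction for comparison
```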
We add complexity to the theory to protect the core. Longer derivation chains. More post hoc parameters. Longer computer simulation code. More complex statistical analysis. More expansive mathematical proofs. Everything gets longer and more complex. But we never give up on the fundamental principles in the hard core. Physicists never give up on conservation of energy. They don’t give up on general relativity. Even though we keep collecting countless observations that falsify the theory, we patch things up by adding a few degrees of freedom here, there, everywhere. These extra explanations are what Lakatos calls the “protective belt” of a theory.
But the computational crud gets you in the end. At some point, these calculations stop being, dare I say, useful. Astrophysics doesn’t get called out because it doesn’t matter if there’s dark energy or modified Newtonian dynamics or aliens playing games with our telescopes. None of this will help us build better computers or launch more satellites. Without that turn to practice, sciences can go and chase whatever weird facts they want to chase until governments stop funding their supercomputers or supercolliders.
But when you turn to practice you can see all sorts of predictions becoming intractably complex. My favorite example, Nancy Cartwright's Perturbation of Galileo[1], is again from physics. Surely, if we drop a bowling ball off the Leaning Tower of Pisa, we can predict where it will land and how long it will take to get there. You’ll take Newton’s laws and a simple air resistance model, and you’ll get a good prediction. But what can you say about a euro bill dropped off the same tower? How accurately can you predict where that bill will land or how long it will take?
Think about what you’d need to compute this prediction using Newton’s Laws. It’s impossible. The dynamics of this system aren’t technically chaotic, but you’d still need unfathomable precision about the initial conditions of the bill and of the air molecules around it. I never know what we’re supposed to take away when someone argues for a simulation that needs more FLOPs than there are atoms in the universe.
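For contrast, the bowling ball half of the comparison really is a few lines: a minimal sketch that integrates Newton’s second law with a quadratic drag term, where the tower height, ball mass, diameter, and drag coefficient are rough, illustrative numbers:

```python
# Bowling ball dropped from ~56 m (roughly the Tower of Pisa) with quadratic drag.
# Illustrative parameters: 7 kg ball, 22 cm diameter, drag coefficient 0.47 (sphere).
import math

g, rho_air = 9.81, 1.225      # gravity (m/s^2), air density (kg/m^3)
m, d, c_d = 7.0, 0.22, 0.47   # mass (kg), diameter (m), drag coefficient
area = math.pi * (d / 2) ** 2

def time_to_fall(height, dt=1e-4):
    """Integrate dv/dt = g - (rho c_d A / 2m) v^2 until the ball hits the ground."""
    t, v, y = 0.0, 0.0, height
    while y > 0:
        drag = 0.5 * rho_air * c_d * area * v**2 / m
        v += (g - drag) * dt
        y -= v * dt
        t += dt
    return t

print(time_to_fall(56.0))  # a bit over the vacuum value sqrt(2*56/9.81) ≈ 3.38 s
```

Nothing remotely this short exists for the fluttering bill: its trajectory depends on fine details of shape, orientation, and airflow that no tabulated drag coefficient summarizes.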
I highlight this to flag something that Meehl doesn’t discuss. Verisimilitude, that is, approximation to the truth, assumes that you can compute all possible derivation chains of a theory. But part of verisimilitude should be how much computation is required to approximate truth. Part of what makes theories useful is not just the ability to make accurate risky predictions in theory but to make accurate risky predictions quickly in practice. This prediction efficiency strongly influences whether we use quantum mechanics, classical mechanics, statistical mechanics, fluid dynamics, or just logistic regression. What if expedience is actually essential to truth?
Loose ends
Meehl ends Lecture 4 and begins Lecture 5 with nomological nets. These were proposed by Cronbach and Meehl in their development of construct validity. At a high level, there are three parts of a nomological network: (1) There are six kinds of scientific concepts (substances, structures, states, events, dispositions, and fields). (2) A theory is built by defining concepts and linking them together (through statements about composition, dynamics, or history). (3) These links form a graph, and the graph is scientific only if the leaf nodes are observational. I haven’t figured out how to work nomological nets into this series just yet, but I will try to expand upon them in more detail when they are referenced in later lectures.
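For my own bookkeeping, here’s a toy, hypothetical encoding of that structure: concepts as typed nodes, links as edges, and a check on part (3) that every leaf is observational. The example concepts are made up:

```python
# A toy nomological net: each concept gets a kind, an "observational?" flag,
# and a list of the concepts it links to. Leaves are nodes with no links.
net = {
    "anxiety":          ("disposition", False, ["skin conductance", "avoidance"]),
    "skin conductance": ("state",       True,  []),
    "avoidance":        ("event",       True,  []),
}

def leaves_are_observational(net):
    """The net counts as scientific only if every leaf node is observational."""
    return all(obs for _, obs, links in net.values() if not links)

print(leaves_are_observational(net))  # True for this toy net
```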
[1] From Chapter 1.1 of The Dappled World.
There must be a term for this, but essentially theories have certain "spans". A theory describes reality using its parameters within a viable "range" for each parameter; beyond that range, the theory breaks down. We appreciate most the theories with the largest spans, i.e., those that cover vast ranges of observed phenomena, from micro to macro scales. When a theory has a constrained span (i.e., it only works in a narrow set of cases but not in other observed ones), we typically look elsewhere for a more "elegant" theory. Thus we want theories that are concise (few degrees of freedom) but also maximally generalizable, which perhaps in a word would make them highly "efficient" theories.