Discussion about this post

David Hilbert

In addition to the precisely controlled interventions characteristic of psychophysical experimentation, two other factors contribute to the robustness and replicability of psychophysical experiments. Unlike many other areas of psychology (and medicine), psychophysics typically uses a within-subject design (comparing the same subject's responses to two different interventions), which avoids the many complications involved in comparing across subjects. Relatedly, most psychophysical tasks can be done quickly and repeatedly, so it is possible to record hundreds or sometimes thousands of data points from a single subject, and you don't need to recruit many subjects. A joke current in color science when I was first trying to master the basics of the discipline held that a psychophysical experiment needs three subjects: the two authors plus the naive subject. This wasn't literally true, but it did capture an important aspect of the literature in the 1970s and 1980s. Thanks for an interesting post.
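The statistical payoff of the within-subject design described above can be sketched with a small simulation (not from the post; all numbers are made up for illustration). Each subject has a large idiosyncratic baseline, but because the same subject serves in both conditions, that baseline cancels when we difference the trial-level responses, and hundreds of trials yield a precise estimate from one person:

```python
# Illustrative sketch: why a within-subject design with many trials per
# subject is statistically powerful. Hypothetical effect sizes and noise
# levels; only the standard library is used.
import random
import statistics

random.seed(0)

n_trials = 500                      # hundreds of trials per subject is feasible
effect = 0.3                        # hypothetical shift between interventions
subject_bias = random.gauss(0, 2)   # large idiosyncratic baseline, shared

# The same subject performs both conditions, so the baseline is common
# to both lists and cancels in the trial-by-trial differences.
cond_a = [subject_bias + random.gauss(0, 1) for _ in range(n_trials)]
cond_b = [subject_bias + effect + random.gauss(0, 1) for _ in range(n_trials)]
diffs = [b - a for a, b in zip(cond_a, cond_b)]

mean_diff = statistics.mean(diffs)
sem = statistics.stdev(diffs) / n_trials ** 0.5
t_stat = mean_diff / sem            # approximately normal for large n_trials

print(f"mean difference: {mean_diff:.3f}, t = {t_stat:.1f}")
```

A between-subject comparison of `cond_a` and `cond_b` drawn from different people would instead have to average over the 2-unit spread of subject baselines, which is why such designs need many more participants to detect the same 0.3-unit effect.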

Mark Johnson

When I read your blog, I generally understand what you're getting at almost immediately (and usually agree). In this series of posts you seem to be worried that statistics isn't a path to truth. As you have explained many times in earlier posts, statistics is often just a bureaucratic convention: a publication hurdle.

But bureaucracies, conventions and hurdles are not always useless, even if imperfect. The rules of the road are just conventions, which are sometimes inefficient (e.g., I have to wait at a red light even when there's no traffic in the orthogonal direction) and certainly don't prevent all traffic accidents, but I'm pleased we have them. Likewise, while p-value requirements are imperfect, I suspect we'd be overwhelmed with even worse papers without them. I expect we could improve both our statistical conventions and our traffic rules, but actually doing either could be tricky.

I also see you hinting that statistical rigour has little to do with understanding something at a theoretical or pre-theoretical level. Control theory, causal models, and some economic modelling try to build models that capture aspects of the underlying phenomena. But as you've remarked in your posts, unstructured machine learning in the form of GenAI is where all the money is today.

The fields of psychology, psycholinguistics and linguistics overlap substantially in the phenomena they cover. Linguistics - perhaps because of Chomsky's influence - is very theoretical and famously numerophobic, while psychology tends to be atheoretical and to rely on statistical methods. I think linguistics has discovered many interesting facts about human language, but without any statistical information it is hard to tell whether any specific claim is reliable.
