This post digs into Lecture 3 of Paul Meehl’s course “Philosophical Psychology.” You can watch the video here. Here’s the full table of contents of my blogging through the class.
You’ve made it to part five of Lecture 3, so you can probably already guess how indirect costs and neoliberal university rent-seeking shape research and its associated literature. But perhaps we can share a catharsis in one final reckoning with the context of discovery.
As Meehl puts it, “You don't know, when you look at what has surfaced, to what extent the experiments in a given domain came to be performed because of the financial and other pressures upon academicians.” Explicitly or implicitly, academic scientists have to raise funds to do their research, and such funds are scarce. Consequently, faculty will work on only what they think can be funded.
As any young investigator knows, federal funding has become competitive. Proposals at the National Science Foundation or the National Institutes of Health are fiercely reviewed by other scientists and infrequently awarded. This review process means that a scientist might choose to play the odds, jumping on every academic trend and throwing their hat into every call for proposals they come across. It also means that scientists must spend their time marketing their ideas to convince their peers that their work deserves one of these rare, prestigious grants.
Meehl argues that funding scarcity also shapes the sorts of projects that people propose, forcing faculty to favor expedience over curiosity. Scientists are compelled to choose the cheapest path to a result. This means that the particular method appearing in a paper is frequently the cheapest, not the most scientifically appropriate.
Well-funded projects come with their own special problems. If a sponsored project grows too large in cost, it becomes too big to fail. If you run some field study with hundreds of staff and hundreds of thousands of participants, the massive expenditure compels you to find some evidence that your intervention did what you said it would do. Does this mean that investigators write up the results of big projects in a way to save face? Does this mean there are incentives to continue to look for evidence of results that aren’t quite there? You’ll have to be the judge when you read such papers.
And what about projects that go against a party line? Are these less likely to be funded? Meehl recounts witnessing overly zealous scrutiny applied to edgier proposals that were out of favor, and he observes that scientists “feel they've got to research what the bureaucrat in Bethesda wants researched.” I know many still believe this to be true. Even such circumstantial evidence adds doubt to one’s assessment of the scientific literature.
Meehl doesn’t discuss this, but we obviously run into similar problems with non-governmental funding sources. Gifts from philanthropists are targeted at pet causes. Gifts from industry are contingent on industrial interests.
You might think that we should look outside the academy for less harried investigations, but industrial research, which has been growing steadily in computer science for the last decade, has its own biases. There is unquestionably a filter on the questions asked by researchers who work in industry. Industrial papers have to pass internal corporate review before being published. There have been notable blow-ups of people getting fired from industrial labs for not toeing party lines.
Now, patronage has always been part of science, but there is something particularly pernicious about our contemporary model built around constant, vicious competition. As I mentioned in passing, the constant competition with peers for scarce funds means scientists are constantly marketing, and this mindless scientific marketing may be the most damaging aspect of all.
Every proposal, paper, and presentation becomes a marketing promotion. The reader has to work through a startup pitch before getting to the main findings. If a clinician or practitioner knows that every publication is a sales document, their interpretation of every result becomes more critical and suspicious.
David Graeber points to this marketing, which has “come to engulf every aspect of university life,” as a primary source of stifled innovation. In his essay “Of Flying Cars and the Declining Rate of Profit” from his 2015 collection The Utopia of Rules, he asks why progress in science seems to have slowed since 1970. In academia, he calls out marketing as a central pernicious force:
“There was a time when academia was society's refuge for the eccentric, brilliant, and impractical. No longer. It is now the domain of professional self-marketers. As for the eccentric, brilliant, and impractical: it would seem society now has no place for them at all.”
Graeber concludes that when scientists spend their time marketing, competing with their peers, and choosing expedience over curiosity, we end up in a world of scientifically overproduced incrementalism.
“That pretty much answers the question of why we don’t have teleportation devices or antigravity shoes. Common sense dictates that if you want to maximize scientific creativity, you find some bright people, give them the resources they need to pursue whatever idea comes into their heads, and then leave them alone for a while. Most will probably turn up nothing, but one or two may well discover something completely unexpected. If you want to minimize the possibility of unexpected breakthroughs, tell those same people they will receive no resources at all unless they spend the bulk of their time competing against each other to convince you they already know what they are going to discover.
“That’s pretty much the system we have now.”
Welp. On that cheery note, we’d better get back to the tidy abstractions of philosophy next post…
I agree with virtually everything in this and the previous post on indirect costs. However, part of intellectual marketing is recruiting members of the research community to assemble a research road map of topics that we believe are fundamentally important, and then selling this road map to the funding agencies and to the colleagues who will be reviewing our proposals. In this way, university researchers can set the agenda and steer research in the right directions.
That said, I think we should explore alternative funding models. For example, could we give every reasonably good faculty member a small annual grant (exempt from F&A charges) to spend as they see fit, without requiring a proposal? Enough to fund one student and one month of summer salary. Call these "Innovation Grants". Under the current system, every reasonably good faculty member eventually gets such a grant (often as part of a larger team), but at great expense in time spent grant writing, selling, and reviewing. Faculty could re-qualify every five years based on measures of creativity rather than "impact" (i.e., publications), explicitly rewarding people who pursue novel directions. Exempting these grants from F&A charges would also remove the incentive for universities to add faculty just to capture the F&A money (a big problem when NIH expanded its grant programs).
We could incentivize larger collaborations by telling faculty that if they combined their individual grants into a consortium to address some bigger question, the consortium could be eligible for additional funding for infrastructure, technicians/software engineers, etc. This would be a purely bottom-up system. (There may be some bugs to be worked out in this idea.)
The product of science is the result of a communal effort, so the incentives that act on individual scientists don't necessarily push the communal product in the same direction over time. In particular, even though it doesn't pay to go against the tide, some scientists do it anyway, and a few produce major new findings (sometimes rewarded with major prizes). Their impact on science may be larger than that of the many others who follow the fashion of the day.
Part of the issue is that there are a lot of scientists, probably more than is optimal for allocating resources to advance science. This is due to the proliferation of higher education and the coupling of teaching and research. A lot of science is bound to be exploration of relatively low importance, but perhaps there's too much of it, beyond the level necessary to facilitate great discoveries.