13 Comments

Rob Nelson

You and Munger are my favorite writers tackling this problem, describing in plain language what is obscured by an elaborate structure of incentives, one hard to see even for those climbing it to named professorships. That complexity makes the structure difficult to see from the outside, and the incentives prevent those on the inside from understanding what's happening because, to use the Upton Sinclair line, their salaries depend upon not understanding it.

I disagree that "No one knows why we do this." There is a long, and probably too boring, story about guilds that goes all the way back to the founding of Bologna and Oxford. The question is why these feudal forms of knowledge production and transfer have persisted so long in a modern world dedicated to rationality and objectivity.

My long answer involves what Thorstein Veblen called "trained incapacity," a recognition that the great benefits of academic freedom come with a few drawbacks, and taking seriously what Dan Davies has to say in The Unaccountability Machine. And continuing to read you and Munger.

Ben Recht

Thank you! At some point, I do need to push my reading of academic history back beyond 1890. I bet the story of the early academic guilds and monks isn't all boring.

Just a small clarification: when I wrote "No one knows why we do this," I was specifically referring to the bizarre practice of evaluation for promotion from tenured Associate Professor to tenured "Full" Professor. We really should do away with this additional burden on everyone involved.

Rob Nelson

Ah, well. My misreading gave me an excuse to broadcast my rant and my appreciation!

I have always assumed the rank of associate professor exists to serve as a status marker for the relatively youngish members of a department. The burden falls on the freshly tenured (who won't complain!) and on those who need to be made to feel slightly inferior because they did something other than climb a bit higher up the reward structure. It also serves the management purpose of capping the pay of some subset of the instructional staff, even if you can't get rid of them.

As a long-time bureaucrat and adjunct member of a few different tribes, my views are suspect.

Maxim Raginsky

Peer review is incompatible with scale. We need to pick one. My spicy take is that, if we want peer review, then we need to bring back elitism. Otherwise, we're doomed to keep wasting our time peer-reviewing the glut of papers written by every Joe with a GPU out there.
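
The back-of-envelope version (every number below is my own illustrative guess, not field data): each submission consumes a fixed number of referee reports, so with a fixed pool of qualified reviewers, the per-person reviewing load grows linearly with the per-person submission rate.

```python
# Conservation of reviewing, with made-up parameters: if every submission
# needs reports_per_paper referee reports, the community must collectively
# write that many reports per paper, however the load is distributed.

def reports_owed(submissions_per_researcher: float, reports_per_paper: int = 3) -> float:
    """Reports each researcher must write per year for the books to balance."""
    return submissions_per_researcher * reports_per_paper

for s in (2, 5, 10, 20):  # hypothetical annual submission rates
    print(f"{s:>2} submissions/yr each -> {reports_owed(s):>4.0f} careful reports/yr each")
```

Two submissions a year means a manageable six reports; twenty a year means sixty, which is roughly what "incompatible with scale" means here.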

Ben Recht

Now do tenure letters...

I dunno, I think we could "scale" with the right structures in place, but flat scaling is indeed impossible.

Molly Trombley-McCann

From your perspective, how do peer review and hierarchical promotion practices play out in terms of false positives and false negatives? Based on my conversations with friends in academia, and my sense of "the discourse", the frustration is mostly about false negatives: good work that is unexpected or disruptive to current models has a hard time getting through, and valuable work reporting null results also tends not to get through. But I'm guessing it's more complicated than that. I'd love to hear how you would describe peer review through that lens.
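
To make the framing concrete, here's a toy sketch (every parameter is an illustrative assumption): treat peer review as a noisy binary classifier over submissions and count both error types.

```python
import random

random.seed(0)

# Toy model of peer review as a noisy classifier; all parameters are
# illustrative assumptions. A submission is "good" with probability
# BASE_RATE; review accepts good work with probability 1 - FN_RATE
# and weak work with probability FP_RATE.
BASE_RATE, FN_RATE, FP_RATE, N = 0.2, 0.4, 0.1, 100_000

fn = fp = accepted = good_accepted = 0
for _ in range(N):
    good = random.random() < BASE_RATE
    accept = random.random() < ((1 - FN_RATE) if good else FP_RATE)
    fn += good and not accept        # good work rejected
    fp += (not good) and accept      # weak work accepted
    accepted += accept
    good_accepted += good and accept

print(f"false negatives: {fn / N:.1%} of submissions")
print(f"false positives: {fp / N:.1%} of submissions")
print(f"share of accepted papers that are good: {good_accepted / accepted:.1%}")
```

With these made-up numbers the two error types are equally common, but only the false negatives come with an aggrieved author to complain about them, which may be part of why the discourse tilts that way.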

Paul Beame

Academia functions as a marketplace of ideas. In science and mathematics we have the benefit of certain widely accepted and standardized minimum criteria required for those ideas. Part of the peer review process clearly functions to check that those minimum criteria are met. This seems an entirely positive aspect of peer review*.

The success of research as a whole does not merely depend on individuals or groups producing good ideas for the marketplace. It also depends on an attention mechanism for those ideas. Without attention, ideas already in the marketplace will simply go unnoticed, and the synergies between existing ideas needed to create new ones won't happen.

This results in an inherently entrepreneurial aspect of academic research. Researchers need to market their ideas to ensure attention for them in the marketplace. They also need to market themselves in order to achieve academic success, to lay the groundwork for attention for their future ideas, and to provide avenues for research collaborations with other researchers.

On top of its critically important criteria-checking aspect, peer review has been our main and best attention mechanism in the marketplace of ideas, so that it isn't all up to marketing. This includes all of the aspects that you view as peer review. As an attention mechanism, peer review has flaws that essentially any attention mechanism is likely to have: it will miss good work to which attention should be drawn and it will over-emphasize some work that is less valuable (no matter how you assign value).

One aspect of peer review that some have cited as its biggest negative has largely disappeared with the advent of the web and online publication.

However, in some areas of research, the scale of production for the marketplace of ideas is so vast that careful peer review cannot keep up on the timelines we have set for it. Attention decisions carry so much inherent randomness, while we have kept the old standards for success, that over-production is incentivized merely to get something successfully published, leading to a further upward spiral in scale and randomness. (The most careful experiments on the flaws of peer review were done in these areas, but that does not mean their results necessarily translate to other fields.) Because of the weakness of peer review in these fields, the true attention mechanism is now other kinds of marketing, such as social media posts by certain leading researchers and others with large followings. A lot of the goal seems to be to influence later peer review decisions.
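
A small simulation of that incentive (all numbers are made-up assumptions): a researcher splits a fixed effort budget across m papers, so each paper's quality is budget/m, and a paper is accepted when its quality plus Gaussian reviewer noise clears a fixed bar. Once the noise is comparable to the bar, expected acceptances grow with m, and slicing the same work thinner becomes the rational strategy.

```python
import random

random.seed(1)

# Over-production under noisy review; every parameter is an illustrative
# assumption. Quality per paper is BUDGET / m; a paper is accepted iff
# quality plus Gaussian reviewer noise exceeds BAR.
BUDGET, BAR, SIGMA, TRIALS = 12.0, 6.0, 6.0, 20_000

for m in (1, 2, 4, 8, 16):
    quality = BUDGET / m
    mean_accepts = sum(
        sum(quality + random.gauss(0, SIGMA) > BAR for _ in range(m))
        for _ in range(TRIALS)
    ) / TRIALS
    print(f"{m:>2} papers of quality {quality:4.1f} -> {mean_accepts:.2f} expected acceptances")
```

Under these assumptions, sixteen thin papers reliably beat one deep paper, which is exactly the spiral described above.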

In other research fields, peer review still seems to provide extremely useful signal as an attention mechanism; important failures of omission, the most serious kind, tend to be quite rare, which is one of the reasons such failures are so memorable.

Peer review, at least in the way that it is currently practiced in many fields, seems to have some key characteristics that one would want in an ideal attention mechanism for the research marketplace. Decisions about attention are:

- made by those who are knowledgeable about the research subjects

- diversified, in that a small number of decision-makers do not control all decisions, and a wide range of those with expertise can contribute

- made under circumstances that allow decisions with sufficient care.

What other features would you want? The other candidates you mentioned are clearly worse along these dimensions.

The thorniest issue with peer review as an attention mechanism seems to be how broadly we allocate attention with it. Where do we place the cut-off? Do we need a cut-off at all?

You might say no to a cut-off, but human attention is very limited so an in-practice cut-off surely exists. (As an example, SciRate (https://scirate.com/) doesn't have any cut-off, producing an unbounded ranking instead, but I expect that knowing that there isn't a cut-off causes people to look at less of the list than they would if they knew that there was a cut-off.)

---------------------

* The quality of this check itself adds much of the value of a journal publication versus a selective conference publication. It is the main reason that the X% difference that some journals require relative to a conference publication is misguided in many instances.

Alex Holcombe

Peer review is a mixed bag. One of its harms is that it is used for gatekeeping, which prevents good work from getting out; in other cases it is used to warp work to fit dogma (editors telling authors how they must change their work to get published). But this has been ameliorated by the rapid rise of preprint servers. Across several fields, more and more scientists consume recent work without waiting for the time-consuming process of peer review and the sometimes counterproductive rejection decisions it justifies. While this effect remains modest so far, some of us are contributing to new peer review models that embrace preprints and divorce peer review from gatekeeping, to further reduce its harms (e.g. the Peer Community Ins and MetaROR).

JZ

What is the value of peer-reviewed journals in the age of arXiv and blogging, where work can get impact/likes/feedback/iteration so much faster? And would you consider those 'science' at all? (I'm looking at areas where most academic discourse seems to occur on Twitter or LessWrong.)

Rob Nowak

"in the hierarchical review examples, the evaluated knows every department or professor that’s evaluating them". not always. typically, the identities of promotion letter writers are not revealed to the person being evaluated.

Ben Recht

Of course, but you know who is in the department. That's never secret.

Rob Nowak

yep. but in my experience the department heavily weights the opinions of the secret letter writers

Rob Nowak

i think this is an important element of the promotion process. i only wish letter writers would provide more critical assessments sometimes... looking at you Rob Nowak
