A position on positions
The complex evolution of academic process doesn't always lead us to better practice.
I’m endlessly fascinated by the interplay between the written and unwritten rules in bureaucratic systems. Even though we might argue that written rules with detailed minutiae make things more transparent, convention often shapes practice far more powerfully than long rule sheets. Nowhere in my experience is this complex mix more evident than in academic publishing.
Academics love to make all sorts of arbitrary rules for themselves. A lovely example is the inane, extended NeurIPS paper checklist, now required as an attachment for all submissions. But most of the rules governing the production of NeurIPS papers evolve organically as we stochastic parrot other people’s papers into new papers using our wetware. We never needed LLMs to generate an exponentially growing stream of boring prose. We never needed an LLM to decide to add bold to our results tables to brag about our methodological superiority. We figured out how to copy each other’s styles and guess what was “expected” through human pattern recognition and gossip.
Another system that has grown organically is the arXiv, a celebrated preprint server for math, physics, and computer science. Its role has shifted dramatically over my academic career. My early work was physics adjacent, and arXiv grew out of the physics community. It existed so that physicists and mathematicians could share their work while they waited out the interminable paper-review processes. arXiv served as a central clearinghouse for working papers, with the idea that they’d eventually get an official publication stamp, be it in a year or five. When I started writing machine learning papers, preprinting was rare. Conference review cycles were much shorter than the journal vetting in physics and math. And people who wanted to talk about their work before publication just posted papers to their personal websites. But by the early 2010s, my colleagues and I were posting all of our machine learning work to arXiv. Eventually, arXiv became mandatory. Today, papers not posted to arXiv don’t exist. Your citation count goes to zero unless it’s on the arXiv. Or at least this is what people tell me.
What happened here? How did a preprint server for working papers become the central hub for all academic work in machine learning?
There’s no simple answer! It’s similar to how Google Scholar replaced people’s publications.html pages. These things happen organically and are not necessarily a sign of “progress.” That universities have decided to crack down on ~/username pages in the name of increasingly less competent “security” measures is another sad example of negative technological progress. Another example of clear regression is Overleaf, a garbage, bloated webapp for writing LaTeX, now inescapable if you want to collaborate on a machine learning paper.
Anyway, I was thinking about these conventions because I can’t escape the latest dustup about position papers and the arXiv. The short version of the story goes like this: at some point, to maintain a level of sanity, the arXiv added moderators to ensure the material posted was plausibly academic. I understand the need to prevent spam, but moderation defeated the original freewheeling spirit of the website. As usual, we can’t have nice things on the internet.
It got worse. Recently, the arXiv found a major uptick in AI-generated “review” papers that seemed to only exist to juke other people’s Google Scholar citation counts. Malicious forces were scamming two free resources at once. The moderators of the arXiv decided they needed to put a stop to this practice and imposed a ban on “position and review papers.” Technically speaking, this ban was already written in the rules. But the announcement meant they would enforce the written rules more harshly.
People were upset. How would people see their position papers if they couldn’t be posted to arXiv? Now, you might ask, what is a position paper? This is a great question. A cottage position-paper industry had grown in the machine learning and AI ecosystem. Conferences like ICML, ACL, and NeurIPS now have whole tracks for position papers. As the NeurIPS call puts it: “Position papers make an argument for a viewpoint or perspective about what should be done, in contrast to research track papers, which report on advances that have already been accomplished.”
Now, I have so many questions about this trend. My first question is short: Why? Why should this exist? I haven’t received a good answer. If you want to show that research should be done, why not do it and add a discussion to your paper? A demonstration, no matter how small, can be done through an experiment or through some mathematical and theoretical development. Moreover, does anyone in 2025 still think that research can be done without having a position? All research comes with experimenter biases and preferences. All research papers have a position. On the other hand, position papers don’t have research. Paraphrasing a conversation with Suresh Venkatasubramanian, if you want to write a position paper saying “someone should do something,” then you should just write a regular paper saying “I did a thing.”
Please feel free to yell at me in the comments about this, um, position of mine. I’m all ears for a justification of why position papers need to exist. If you don’t want to do the research, you can blog or tweet or stand on a soapbox. You don’t need to generate a PDF to add to your Google Scholar profile.
But the weirder part of the preprint scandal is that arXiv says it will let you post position papers if they have been peer reviewed. This also seems like an absurd half-measure to tamp down complaints. What does it even mean to peer review a position? Apparently, the NeurIPS track decided to only accept 6% of the submissions. What part of the peer review process deems these papers better than the other 94%? A peer review of opinion pieces seems wildly counterproductive to the whole argument for the track in the first place.
And I’m also shocked by the idea that you would post to arXiv after having a paper published. The preprint server is now a post-print server? This is a weird place we’ve found ourselves in—arXiv now functions as a social media site. Every paper has to have a blog post and a Twitter thread and an arXiv post. You can schedule them all using Buffer. A paper “won’t get seen” unless it has an arXiv link. We’ve certainly evolved a weird set of practices for ourselves. Academic machine learning and artificial intelligence, a field that is drowning in interest and money, is now overburdening a non-profit preprint server by using it as both academic social media and a directory of paper permalinks.