Really appreciate the candor. Need more of this. It boggles my mind that so many academics who have "made it" seem psychologically unable to do anything but continue to seek the high score in the video game that is google scholar.
My suspicion is that there is a cultural element to this, that this is a distinctively American pathology (with the caveat that the world is becoming Americanized). European journalists sometimes call me because of my book about how US politicians are so old, and while I have structural/institutional explanations, they're looking for *psychological* explanations:
Why do they keep running? Why don't they just retire and enjoy the perks of a good life?
This question is basically nonsense from the perspective of mainstream, meritocracy-pilled America.
Wholeheartedly agree. I have a methods follow-up question. How do you, as a political scientist, separate structural/institutional from sociopsychological explanation? Are these separable?
Also, making a note that I need to read your book.
https://cup.columbia.edu/book/generation-gap/9780231553810
Heh, I don't really think these are separable in any sense that approaches the scientific. The institutional factors, we hope, are something that we can learn about and change.
But, in this case, no one designed these academic institutions, and no one can really change them. So, to speculate, it seems like the cultural->psychological factors in the US caused the situation with academic institutions we're in now, with the hypercompetitiveness, which only reinforces the psychological experience you describe so well in this post.
This sounds very much like Jenny Odell's arguments in her book "How to do Nothing", when she asks, "what's it all for?"
In any case, I've been enjoying your blogging and have never read one of your papers. I think this is a valuable use of your time.
This series of blogs on lecture 8 is truly awesome, I hope lecture 8 never ends!
Hah, I was just thinking, "Shit, I'm never going to get out of Lecture 8."
Yes, to all of this. I have been saying more or less the same things on twitter dot com for quite some time, and have been on the receiving end of the performative accusations of "speaking from the position of privilege." And the pandemic lockdown also brought these points home for me in a big way.
I empathize with those who argue for privilege checks, but it's a bit weird that those of us who get to the end of the race aren't allowed to tell people what the race was like for them. I'm not sure that's helpful either.
Of course, there's a certain irony in that it's the least privileged ones who are most invested in the race, while the most privileged ones have seen it from the inside and want to tear it all down. (Not to mention that, in my case, the race has not exactly been without its costs, i.e., ten years of postdoc and other precarious positions before getting my tenure-track job.)
What if we started a prestigious ML conference with the rule that any given individual could author at most one paper every other year? What if universities or professional societies set publication limits on their professors?
Don't get me wrong, there are definitely mechanisms that would change things. Demanding higher standards of reproducibility, longer review periods, etc. But it was funny to hear Meehl's proposals from 1989, know that many of them did in fact come to pass, and see that we're in worse shape 35 years later. I worry that rule changes to slow the deluge would still be ineffective. Would it not be more productive to focus on rule changes to improve the average quality of the near-infinite paper stream?
The easiest thing to implement for fixing the ML publication race is to get rid of the conference mania and convince folks to go full-on journals--like how the rest of science handles it.
This doesn't solve ALL problems, but the joke that is conference submission+review will at least be solved.
Your experience gives me hope, Maxim. I am in my 11th year as a postdoc :) I find industry engagement to be, honestly, an experience on par with teaching engaged students.
As Ron Swanson put it, instead of doing several jobs at varying degrees of half-assedness, do less stuff in a fully-assed way. It's more satisfying.
Hi Ben,
Thank you for this blog post series! I think it's very illuminating and we could use a lot more examination of issues like these. After reading the post and the comments, I wanted to chime in: I think you are radically underestimating incentives that push people to publish too much.
We have a lot of mechanisms in academia that tend to pile fame, prizes, and money on people in a winner-takes-all way. Being at the receiving end of this has a lot of benefits, not just the money. You also get to work with more people and attract more talented students. And, even controlling for talent and achievement, your students will do better later on, as your fame will help them get jobs. These effects are huge, IMO. In contrast, looking unproductive might mean you don't get funding, and end up sad and lonely with no people or resources to do your research with.
How can one gain access to all these benefits? You need your work to get a lot of exposure and hype. This can happen through publishing a lot or being at a 'top' school. Or it could be from just doing great work, but that's much harder to do. And it might still be that no one notices, or that people notice but don't give you credit, especially if you're not at a famous school or perceived to be powerful in some other way.
Professors and students at famous schools (like UC Berkeley is for ML/CS) are obviously often very talented. But, the rewards they can reap from that talent are disproportionately big compared to a grad student doing similar work at CUNY or even a place like Northwestern. And, it's too easy to game the system: You can get famous from hype without a lot of substance.
I think that changing things would have to involve somehow either getting better at assigning credit to good work and not to 'hype'. Or, it might involve making the benefits of being at the 'top' smaller. Then intrinsic motivation would have to be the reason people keep doing science.
I'm not sure how to shift the balance more in favor of better research over hype. You have some ideas on this blog that might be helpful. I am inclined to think that more focus on exposition and reproducibility can help.
I also don't mean to say that 'hype' is all bad. We need tools for attracting bright young people into science (e.g. instead of finance or whatever). We need ways to convince politicians and voters to fund our work. And being realistic about the impact of any particular piece of research can go against this because people want great stories about smart people and exciting discoveries. They don't want a boring slog where little increments done by people slowly add up, even if that might be an equally big, or even bigger, part of how science has contributed to our knowledge and understanding.
What do rewards in academia look like? Well, even between different 'famous' people, there can be big discrepancies. For example, you make 2.3 times what Branko Milanovic makes. I don't mean to imply that you deserve less or he deserves more, I'm just trying to highlight salary gaps between two academics working in similarly expensive metropolitan areas.
On the technical side, I'm a big fan of your research; please don't read this as me disparaging your work.
I am somewhat convinced after watching the brief portion that the N adj is shorthand from older personality psych nomenclature. It is probably short for negative adjustment, which doesn't really mean much to me, but a modern translation might be something more like negative affectivity... https://en.wikipedia.org/wiki/Negative_affectivity
There are also adjustment disorders: https://www.mayoclinic.org/diseases-conditions/adjustment-disorders/symptoms-causes/syc-20355224
I am not a psychologist, but am married to one, so take this all for what it is worth. A pertinent paraphrased quote from my significant other that probably applies well to negative adjustment: All personality disorders are just more extreme versions of normal human traits. I have not kept up with your series, but did read the first installment or two and asked my partner about Paul Meehl and some context with regard to the history of psychology.
I think he was saying "high N-Ach types" https://en.wikipedia.org/wiki/Need_for_achievement
Oh, that must be it! CW, want to run this by your partner?
Yes. Maxim is correct. She said she had never heard anyone abbreviate it like that before, but that is it.
I appreciate the correct answer and would "like" your comment, but my obsolete, non-updatable browser no longer allows me to like comments.
Whoah Ben, deep thoughts on this one. I do wholeheartedly agree that less is more.
Do you think registered reports would reduce the pressure to publish, reduce stress, and improve researchers' mental health? Would you agree the pressure to publish is mostly a pressure to publish positive data? Since registered reports grant in-principle acceptance of studies regardless of whether a hypothesis is supported, whether p<0.05, or whether results are "novel" or have "impact", severing the connection between the character of results and publication decisions would 1) increase publication of null results, 2) reduce publication of false positive results, and 3) make researchers' lives easier, in my view. I would bet that (1) and (2) would overall increase replicability, even if you think the replication crisis is overblown. Really enjoying your posts, Ben!
I'm not sure, to be honest. I used to like this idea, but the more I've been grappling with Meehl, the more I worry it would only encourage the hyperproductive people to propose a bunch of mindless experimental plans. This is why I've become a die-hard stickler for reproducibility, but tend to think that beyond that, as Feyerabend says, "anything goes."
But wouldn't mindless experimental plans be revised to be less mindless during Stage 1 review (i.e., before a study is given in-principle acceptance and before data collection)?
This is a very interesting discussion. My knee-jerk reaction is to agree that "less is better". But do we know that it is true? Maybe lots and lots of tiny noisy advances are preferable to a few major results as far as the progress of science is concerned. After all, that is what we see in modern ML, where models with lots of meaningless parameters outperform classical models with a few interpretable parameters. It seems that the incredible progress of the last few years has been of that kind.
Lots of tiny noisy advances are fine, as long as we disabuse ourselves of the pernicious idea that every such advance should automatically be entered (to quote Meehl again) “into the tablets of jade in the dean’s office.” If you’re invested in science, stop chasing all the prestige markers.
Social standing is important and people will chase it, whether the advances are large, small or non-existent. Science may well advance through such imperfect means.
https://theanarchistlibrary.org/library/lao-tzu-tao-te-ching#toc10
Frankly it seems less like collective psychosis or neuroticism to me than like simple market pressure. Papers are a commodity and so competition leads to a glut of that commodity.
IMHO, economists (and people who like economics) don't spend enough time examining the difference between collective psychosis/neuroticism and market pressure.
LOL.
I don't even like Econ or have more Econ training than a gen-ed course. I just remember what it was like for cohorts graduating college 2010-2013ish and trying to find a first job.
Professor Recht,
I have the rare privilege of working in a place where my research efforts are encouraged, but where at the same time I do not feel the pressure of "publish or perish".
It's been great. I review a lot of papers and try to be a nice and helpful reviewer, since I actually enjoy doing it. If I have a really exciting research idea, I'll pursue it and try to get it published, of course. But lately it feels like the greatest contribution I could make to science with my limited time and brainpower is to figure out how to create Python libraries that enable cognitive psychologists to write more reliable and transparent code. It's meaningful work, even if it seems to be more about engineering than research. But trying to develop code that is meant to be user-friendly and robust is slow work. If I had to worry about how many publications this effort would immediately yield, I would probably have to narrow the scope substantially.
With recent events in US Politics, I am a bit uncertain about whether I will remain employed in my current institution. (Not saying what it is because we have policies that limit what I can say if I do say where I work.)
You say that your blog posts have a comparable value to your articles. I have to admit, I haven't read too many of your articles, though I now see a few that might be useful to me. I can only attest that your blog has been an invigorating source of fresh thinking on important but under-discussed issues in science, particularly in social science and the application of machine learning to science.
sounds like the plan is to *first* become full professor at a top-x department, *then* pull back and write books? :^)
lol, it looks a bit like you're mostly engaging with the people that agree with you :s
"truly awesome" "really appreciate the candor" "I've been enjoying your blogging" "Yes, to all of this." "Whoah Ben, deep thoughts on this one."
I remember that in CS281A you gave a lecture on how machine learning thrives off of competition. In fact, I remember you arguing that having a way of benchmarking how good our model is, as opposed to someone else's model, through curated datasets is what makes the wheel in machine learning spin. Quantity/Quality/Whatever in papers is our measurement for success in academia. It seems to be the benchmark that keeps the competitive spirit alive in all academics, including even the most successful ones with Turing Awards/Nobel Laureates/etc...
I don't get what psychoanalysis contributes to explaining the overproduction of "research". The only real reason for it is in this sentence you wrote: "We fund an exponential expansion of the academy after World War II." You're going to get overproduction when this happens no matter how researchers feel about it. When the government subsidizes corn, you're going to get overproduction of corn no matter how corn farmers feel about it (btw, most of them would feel pretty good about it actually whatever they may tell their therapists). We don't attribute corn overproduction to "overachieving farmers". The only real way to deal with the overproduction is to end the subsidies: in the case of academia to drastically slash the budgets of NIH (FY 2024 budget: $47B), NSF (FY 2024 budget: $11B), end the federal subsidies to higher ed., etc.
Drastically slashing an already underfunded NIH and NSF would be one of the worst moves the US government could make, in almost every facet of politics (from domestic to foreign).