"But just as we can clearly see the folly of Maupertuis, Leibniz, and Planck in their attempts to capture the workings of the entire universe in a tidy variational principle, we have to be on guard against technologists’ hubris in all its manifestations, from technocratic governments to Silicon Valley oligarchy. It is certainly tempting to cast about for technological solutions to all of the challenges and dilemmas facing us because “there is a tide in the affairs of men.” But working toward human ideals and aspirations such as justice, happiness, and peace is not a matter of tweaking an objective function here, adding a constraint there, collecting more data, and churning through gargantuan computations using the latest advances in machine learning. It is not just another domain for applying game theory or social choice theory. It is the arena of conflict of values, goals, and worldviews, in which we, as citizens in a democracy, have to use political means to fight for what we think is the right thing to do."
Can we learn anything by comparing the academic funding model to the venture capital system? VCs are investing their own money, and yet they make decisions on even less data than is contained in the typical grant proposal. They seem to share other weaknesses (herding, hype) with the grant funding system. Their success rate is low; I wonder if it is similar to the success rate of NSF grants (obviously the definition of success is not the same)? Maybe the rate of bad papers is not so different from the rate of failed startups?
There is also the difficulty that many important advances were side effects of projects funded for other reasons. In computing, my understanding is that the Mosaic web browser was supported by the NCSA grant, but was not part of the grant proposal. I'm not sure about Berners-Lee's work at CERN.
I do think government funded proposals should be made public and should be subject to more careful lessons-learned analysis. How to do this without further penalizing creative, high-risk projects is unclear to me. Failed startups seem to be regarded positively by VCs (presumably because lessons are learned). Can we have a similar (or more effective) process for government-funded science?
To establish the proper level of funding would seem to require some form of return-on-investment analysis that compares ROI of research to the ROI of other uses of taxpayer funds. This is obviously riddled with potential hazards!
In debating the policies of this administration, it's so hard to separate the bad faith from the good. I agree with you that VCs are loath to carefully evaluate their own success rates. It would not be in their interest to be honest about it.
I also don't want academia to take any cues from startup culture. We should not be "moving fast and breaking things" nor should we be "faking it until we make it." It's bad enough that my department and university condones running research labs as startup incubators. This has been toxic for the social and research culture of our department.
Vannevar Bush's Faustian bargain with the government is finally coming home to roost. We now have to be careful in our thinking about how academic science moves forward, having found the endless frontier actually does have an end.
I raise the VC comparison as an argument against the complaints that most academic research doesn't lead to breakthroughs. Most R&D doesn't lead to breakthroughs. At least academic research produces trained researchers. I agree that the startup mindset you describe is not good for advancing science.
I do think there are many directions for improvement in US science funding. I like the Canadian system of sending funding directly to students. They then vote with their feet to join the research groups that they find most interesting. This acts to disempower the old guard and empower the next generation.
I also like aspects of the DARPA model where a program manager formulates a fairly concise research question and then funds 5-10 teams to work on that question. Various combinations of collaboration and competition within these teams can lead to rapid progress. In addition, funding ends in 3-5 years. This avoids the problem of a line of research getting ongoing funding for decades despite having nothing to show for it.
That said, the biggest weakness with the DARPA model (and the 3-year individual NSF grant) is that it does not fund work that requires 10-20 years to reach fruition. Jason Crawford's essay (linked to by I Lang) does not address that either. Canada's CIFAR approach and some of the NSF long term research efforts are able to do this.
Crawford has no concept of how strong the competition already is for grant funding. NSF does a pretty good job of funding a wide portfolio of ideas rather than a single direction or subcommunity. One key to evolution is to have constant turnover in the people making funding decisions. NSF and DARPA rely primarily on "rotators" (academics and industry folks who spend 2-4 years on temporary assignment in funding agencies).
I'm intimidated to be cited like that! I don't have a deep insight to share, except maybe my naïve take that peer-review, like democracy, is a terrible system -- but it seems much better than any of the alternatives.
One would have thought that after the horrors of the 20th century, the idea that you could directly and objectively derive your own ideology and policy from "the science", lab2table style, would be completely abandoned. Apparently the temptation in (pretending to?) doing so is too big.
I've found the Collins/Evans "studies of expertise and experience" framework to be helpful -- it certainly doesn't provide a guidebook for exactly when we should be deferring to scientists or not, and I think that's a point in their favor.
But this framework also makes it clear, I think, how catastrophic the COVID case was. Everyone was affected by it, and everyone had direct experience and thus possible expertise about it. This means that we should be democratizing decision-making and not deferring to experts. But also everyone was affected by everyone else's actions -- a decentralized response seems like it would've been chaotic and far worse than we got.
Add in the fact that no one has figured out how to science communication in a context with 1) ubiquitous internet and more perniciously twitter use 2) a highly educated, statistcally pseudo-literate population, and we're cooked
"a highly educated, statistically pseudo-literate population" - you mean academia? You know my schtick. I'm barely half-joking. The statistical data torture is coming from inside the house.
Snark aside, I totally agree with this, and it's related to your recent post on what video games cause. Paraphrasing my student Paula Gradu: Academics get defensive and huffy and yell, "Don't confuse your Google search with my PhD." But everyone else rightfully yells back, "Don't confuse the 5 minutes you spent reading the abstract of that stupid meta-analysis with my lived experience."
In any event, when someone cites Harry Collins, I immediately print out the citation. He's one of my favorites. Though there's probably no better cure for the mental disorder of being a scientist than reading Collins...
> Zeus described the dialectic between acquiring expertise and claiming expertise. One can be a scholar who devotes their life to acquiring expertise in a subject without putting themselves forward as an expert. Research is curiosity-driven, but when you declare yourself an expert, you abandon that curiosity. You abandon the embrace of uncertainty. You declare the knowledge you have sufficient to make decisions.
... and this is insulting, demeaning, and regrettably common - even among faculty.
I vote for a series of posts about this. We need more of it in the age of AI Influencers.
Yes, I agree we should have expected the vaccine to work about as well as it does, as you say. My point was that even though Australia is much like the US in many ways (perhaps a decade behind in some ways) Covid didn't produce anything like a crisis of trust in science.
There were some very strange outcomes; e.g., many Aussies liked state and regional border closures far more than I would have predicted (keeping out people from poxy Sydney and Melbourne aligned with a resentment toward those wealthier cities that I didn't know existed), and governments that implemented those closures were voted back with very large majorities. But the health officials or "science" in general weren't blamed or given credit for this decisions, as far as I can tell.
I've been in Australia for the past 15 yrs and our experience of Covid was very different; while it had some very strange political effects, it hardly led to a crisis of faith in science. The general sense was that our politicians were muddling their way through, getting some things right and some things wrong as usual. The fact that vaccines don't provide sterilising immunity is like other projects that over-promised and under-delivered.
A friend of mine asked why software engineering produces products that are often buggy, while "real" engineering (like bridges) are far more reliable. I think this is a choice: we could make our software far more reliable, but at a cost of not "moving fast and breaking things". There's nothing inherent about aeronautical engineering that makes it reliable, as we see with Boeing.
I do think it's interesting to ask why there's a replication crisis in some disciplines but not others. When I'm asked if NLP and ML has a replication crisis I say it doesn't, but because I don't believe any result or approach until I've seen other people successfully apply it to their problems. While I am generally supportive of sharing data and code, I suspect this alone is not sufficient to ensure reliable scientific results (as you've been saying).
I think it was in your blog that I saw the comment that some disciplines involve sharing, adapting and re-using technology far more than other disciplines. While this sounds generally right, I think it's not the whole story. For example, psychology does reuse technology such as eye-tracking, priming, etc., across domains. The difference with NLP and ML is that the psychological claims (e.g., about hidden preferences or whatever) aren't about the technology, whereas in NLP and ML they are often directly about the technology or very close to it (e.g., LLMs can recognise certain kinds of named entities).
Just regarding the vaccine (although the rest of your comment makes sense and this may not detract from it), as far as I understand it, the efficacy of vaccine regards to "sterilization" may not just be a function of the vaccine used but the virus itself. For instance, even the flu vaccine doesn't confer sterilizing immunity esp for fast mutating viruses where the vaccine coverage is low among most demographics. I think the "over-promised" and "under-delivering" was more due to the general public's lack of knowledge of a VERY complicated subject matter: immunology. Which is basically the study of an extremely complex system with other complex systems (immune system vs. pathogens).
Yes, I agree we should have expected the vaccine to work about as well as it does, as you say. My point was that even though Australia is much like the US in many ways (perhaps a decade behind in some ways) Covid didn't produce anything like a crisis of trust in science.
There were some very strange outcomes; e.g., many Aussies liked state and regional border closures far more than I would have predicted (keeping out people from poxy Sydney and Melbourne aligned with a resentment toward those wealthier cities that I didn't know existed), and governments that implemented those closures were voted back with very large majorities. But the health officials or "science" in general weren't blamed or given credit for this decisions, as far as I can tell.
oh totally agreed: as I said my pedantic comment shouldn't detract from your point! It's sad to see that as a Canadian, I see my home-country take the same general direction as the US with trust in public health and science in general take a hit ... we even have huge measles outbreaks now. But we don't see the kind of backlash that the US is seeing. But the USA has always been an "interesting" place.
100%
Here's what I wrote in the final paragraph of my Mathematical Intelligencer essay about Aristotelianism and optimal control (https://link.springer.com/article/10.1007/s00283-024-10391-w):
"But just as we can clearly see the folly of Maupertuis, Leibniz, and Planck in their attempts to capture the workings of the entire universe in a tidy variational principle, we have to be on guard against technologists’ hubris in all its manifestations, from technocratic governments to Silicon Valley oligarchy. It is certainly tempting to cast about for technological solutions to all of the challenges and dilemmas facing us because “there is a tide in the affairs of men.” But working toward human ideals and aspirations such as justice, happiness, and peace is not a matter of tweaking an objective function here, adding a constraint there, collecting more data, and churning through gargantuan computations using the latest advances in machine learning. It is not just another domain for applying game theory or social choice theory. It is the arena of conflict of values, goals, and worldviews, in which we, as citizens in a democracy, have to use political means to fight for what we think is the right thing to do."
Can we learn anything by comparing the academic funding model to the venture capital system? VCs are investing their own money, and yet they make decisions on even less data than is contained in the typical grant proposal. They seem to share other weaknesses (herding, hype) with the grant funding system. Their success rate is low; I wonder if it is similar to the success rate of NSF grants (obviously the definition of success is not the same)? Maybe the rate of bad papers is not so different from the rate of failed startups?
There is also the difficulty that many important advances were side effects of projects funded for other reasons. In computing, my understanding is that the Mosaic web browser was supported by the NCSA grant, but was not part of the grant proposal. I'm not sure about Berners-Lee's work at CERN.
I do think government funded proposals should be made public and should be subject to more careful lessons-learned analysis. How to do this without further penalizing creative, high-risk projects is unclear to me. Failed startups seem to be regarded positively by VCs (presumably because lessons are learned). Can we have a similar (or more effective) process for government-funded science?
To establish the proper level of funding would seem to require some form of return-on-investment analysis that compares ROI of research to the ROI of other uses of taxpayer funds. This is obviously riddled with potential hazards!
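To make concrete why it is hazard-riddled: here is a minimal sketch, with every figure invented for illustration (none of it reflects real NSF or budget data), of how the computed ROI of a research portfolio swings with two assumptions nobody can pin down -- the social discount rate and how much of the downstream benefit you attribute to the funded work.

```python
# Hypothetical sensitivity check for a research-ROI calculation.
# Every number below is made up for illustration.

def npv(annual_benefit, years, discount_rate):
    """Net present value of a constant annual benefit stream."""
    return sum(annual_benefit / (1 + discount_rate) ** t for t in range(1, years + 1))

investment = 100.0           # public outlay, arbitrary units
gross_annual_benefit = 15.0  # assumed downstream benefit per year
horizon = 30                 # years over which benefits accrue

for discount_rate in (0.02, 0.05, 0.08):
    for attribution in (0.2, 0.5, 1.0):  # share of the benefit credited to the funding
        benefit = npv(gross_annual_benefit * attribution, horizon, discount_rate)
        roi = (benefit - investment) / investment
        print(f"discount={discount_rate:.0%}, attribution={attribution:.0%}: ROI={roi:+.2f}")
```

Depending on where you set those two dials, the same toy portfolio goes from losing about two-thirds of the money to more than tripling it, which is one way of spelling out what "riddled with potential hazards" means.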
In debating the policies of this administration, it's so hard to separate the bad faith from the good. I agree with you that VCs are loath to carefully evaluate their own success rates. It would not be in their interest to be honest about it.
I also don't want academia to take any cues from startup culture. We should not be "moving fast and breaking things," nor should we be "faking it until we make it." It's bad enough that my department and university condone running research labs as startup incubators. This has been toxic for the social and research culture of our department.
Vannevar Bush's Faustian bargain with the government is finally coming home to roost. We now have to think carefully about how academic science moves forward, having found that the endless frontier actually does have an end.
I raise the VC comparison as an argument against the complaints that most academic research doesn't lead to breakthroughs. Most R&D doesn't lead to breakthroughs. At least academic research produces trained researchers. I agree that the startup mindset you describe is not good for advancing science.
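One way to see why a low hit rate isn't, by itself, an indictment: if payoffs are heavy-tailed, a handful of wins can carry the whole portfolio. A toy simulation in that spirit (hit rate, payoff distribution, and portfolio size are all made-up numbers, not estimates of NSF or VC performance):

```python
# Toy portfolio with a low hit rate and heavy-tailed payoffs.
# Every parameter is invented for illustration.
import random

random.seed(0)

def portfolio_multiple(n_projects=200, hit_rate=0.05, cost_per_project=1.0):
    """Most projects return nothing; the rare hits draw from a heavy-tailed payoff."""
    total_cost = n_projects * cost_per_project
    total_return = 0.0
    for _ in range(n_projects):
        if random.random() < hit_rate:
            # Pareto-distributed payoff: usually modest, occasionally enormous.
            total_return += random.paretovariate(1.2) * 10 * cost_per_project
    return total_return / total_cost

multiples = sorted(portfolio_multiple() for _ in range(1000))
print(f"median portfolio multiple: {multiples[len(multiples) // 2]:.2f}x")
print(f"portfolios at least breaking even: {sum(m >= 1 for m in multiples) / len(multiples):.0%}")
```

The point is purely structural: when the payoff distribution is heavy-tailed, judging the enterprise by the fraction of individual projects that "succeed" misses where the value comes from -- and that's before counting the trained researchers, who come out of the failed projects too.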
I do think there are many directions for improvement in US science funding. I like the Canadian system of sending funding directly to students. They then vote with their feet to join the research groups that they find most interesting. This acts to disempower the old guard and empower the next generation.
I also like aspects of the DARPA model where a program manager formulates a fairly concise research question and then funds 5-10 teams to work on that question. Various combinations of collaboration and competition within these teams can lead to rapid progress. In addition, funding ends in 3-5 years. This avoids the problem of a line of research getting ongoing funding for decades despite having nothing to show for it.
That said, the biggest weakness with the DARPA model (and the 3-year individual NSF grant) is that it does not fund work that requires 10-20 years to reach fruition. Jason Crawford's essay (linked to by I Lang) does not address that either. Canada's CIFAR approach and some of the NSF long term research efforts are able to do this.
There’s an interesting post here comparing research funders and VCs and discussing what would need to happen for the former to become more like the latter: https://blog.rootsofprogress.org/accelerating-science-through-evolvable-institutions.
Crawford has no concept of how strong the competition already is for grant funding. NSF does a pretty good job of funding a wide portfolio of ideas rather than a single direction or subcommunity. One key to evolution is to have constant turnover in the people making funding decisions. NSF and DARPA rely primarily on "rotators" (academics and industry folks who spend 2-4 years on temporary assignment in funding agencies).
Sorry, but what is the point of NSF or even scientific funding if we will have AGI soon?
I think only one of the following is true:
1. We will have AGI soon and it will only give incremental benefits over what we have now
2. We will never have AGI
I'm intimidated to be cited like that! I don't have a deep insight to share, except maybe my naïve take that peer-review, like democracy, is a terrible system -- but it seems much better than any of the alternatives.
One would have thought that after the horrors of the 20th century, the idea that you could directly and objectively derive your own ideology and policy from "the science", lab2table style, would be completely abandoned. Apparently the temptation to do so (or to pretend to) is too great.
The current situation is really depressing.
I've found the Collins/Evans "studies of expertise and experience" framework to be helpful -- it certainly doesn't provide a guidebook for exactly when we should be deferring to scientists or not, and I think that's a point in their favor.
But this framework also makes it clear, I think, how catastrophic the COVID case was. Everyone was affected by it, and everyone had direct experience and thus possible expertise about it. On that logic, we should have been democratizing decision-making rather than deferring to experts. But everyone was also affected by everyone else's actions -- a decentralized response seems like it would've been chaotic and far worse than what we got.
Add in the fact that no one has figured out how to do science communication in a context with 1) ubiquitous internet and, more perniciously, Twitter use and 2) a highly educated, statistically pseudo-literate population, and we're cooked.
"a highly educated, statistically pseudo-literate population" - you mean academia? You know my schtick. I'm barely half-joking. The statistical data torture is coming from inside the house.
Snark aside, I totally agree with this, and it's related to your recent post on what video games cause. Paraphrasing my student Paula Gradu: Academics get defensive and huffy and yell, "Don't confuse your Google search with my PhD." But everyone else rightfully yells back, "Don't confuse the 5 minutes you spent reading the abstract of that stupid meta-analysis with my lived experience."
In any event, when someone cites Harry Collins, I immediately print out the citation. He's one of my favorites. Though there's probably no better cure for the mental disorder of being a scientist than reading Collins...
> Zeus described the dialectic between acquiring expertise and claiming expertise. One can be a scholar who devotes their life to acquiring expertise in a subject without putting themselves forward as an expert. Research is curiosity-driven, but when you declare yourself an expert, you abandon that curiosity. You abandon the embrace of uncertainty. You declare the knowledge you have sufficient to make decisions.
... and this is insulting, demeaning, and regrettably common - even among faculty.
I vote for a series of posts about this. We need more of it in the age of AI Influencers.
Challenge (conditionally) accepted. I'll kick off next week with something about this.
I've been in Australia for the past 15 years and our experience of Covid was very different; while it had some very strange political effects, it hardly led to a crisis of faith in science. The general sense was that our politicians were muddling their way through, getting some things right and some things wrong as usual. The fact that the vaccines don't provide sterilising immunity came across like other projects that over-promised and under-delivered.
A friend of mine asked why software engineering produces products that are often buggy, while "real" engineering produces things like bridges that are far more reliable. I think this is a choice: we could make our software far more reliable, but at the cost of not "moving fast and breaking things". There's nothing inherent about aeronautical engineering that makes it reliable, as we see with Boeing.
I do think it's interesting to ask why there's a replication crisis in some disciplines but not others. When I'm asked if NLP and ML have a replication crisis, I say they don't, but only because I don't believe any result or approach until I've seen other people successfully apply it to their problems. While I am generally supportive of sharing data and code, I suspect this alone is not sufficient to ensure reliable scientific results (as you've been saying).
I think it was in your blog that I saw the comment that some disciplines involve sharing, adapting and re-using technology far more than other disciplines. While this sounds generally right, I think it's not the whole story. For example, psychology does reuse technology such as eye-tracking, priming, etc., across domains. The difference with NLP and ML is that the psychological claims (e.g., about hidden preferences or whatever) aren't about the technology, whereas in NLP and ML they are often directly about the technology or very close to it (e.g., LLMs can recognise certain kinds of named entities).
Just regarding the vaccine (although the rest of your comment makes sense and this may not detract from it): as far as I understand it, a vaccine's efficacy with regard to "sterilization" may not just be a function of the vaccine used but of the virus itself. For instance, even the flu vaccine doesn't confer sterilizing immunity, especially for fast-mutating viruses where vaccine coverage is low among most demographics. I think the "over-promised" and "under-delivered" impression was more due to the general public's lack of knowledge of a VERY complicated subject matter: immunology, which is basically the study of an extremely complex system interacting with other complex systems (immune system vs. pathogens).
Yes, I agree we should have expected the vaccine to work about as well as it does, as you say. My point was that even though Australia is much like the US in many ways (perhaps a decade behind in some ways) Covid didn't produce anything like a crisis of trust in science.
There were some very strange outcomes; e.g., many Aussies liked state and regional border closures far more than I would have predicted (keeping out people from poxy Sydney and Melbourne aligned with a resentment toward those wealthier cities that I didn't know existed), and governments that implemented those closures were voted back with very large majorities. But the health officials, or "science" in general, weren't blamed or given credit for these decisions, as far as I can tell.
oh totally agreed: as I said, my pedantic comment shouldn't detract from your point! It's sad that, as a Canadian, I see my home country taking the same general direction as the US, with trust in public health and science in general taking a hit ... we even have huge measles outbreaks now. But we don't see the kind of backlash that the US is seeing. Then again, the USA has always been an "interesting" place.
100%