Weather forecasting has slowly and steadily evolved from mostly snake oil to a generally positive and helpful technology. Having physics and frequent reality checks has greatly assisted in moving the field in the direction toward more effective methods.
There was also the huge impact of computers. The origins of modern weather forecasting as an early application of computing fancied by von Neumann is indeed an interesting case study.
The fact that physics-based weather forecasts are mostly run on code written in Fortran 66 (the version of Fortran released in 1966) says something important about what separates good and bad normal technology. Good normal technology has diminishing returns: an initial big wave of investment followed by a steady state with minor process improvements (the algorithms haven't changed much, the computers have just gotten bigger). Bad normal technology is only profitable by sucking up ever larger amounts of capital (advertising, MLMs, and maybe LLMs fit this model).
Can "weather forecasting" be described as a technology in the same way LLMs and crypto and advertising can be? Maybe so, if advertising is a reasonable example. I know the Farmer's Almanac is flawed, and maybe this is hindsight bias, but it's hard to think of weather forecasting as snake oil rather than just "a thing that we haven't totally figured out yet."
Hi, Ben! This part caught my attention:
> Let me be clear. I don’t think AI is snake oil OR normal technology. These are ridiculous extremes that are palatable for clicks, but don’t engage with the complexity and weirdness of computing and the persistent undercurrent promising artificial intelligence.
I realized that, while I've read a bunch of your texts, I'm not sure *what* you think AI is. Or, to be more precise: how the field will progress, what kind of impact we should expect, and so on. In your recent appearance on the Increments podcast you say something like 'the doomers have a very clean narrative of what's going to happen' (or something like this, pardon me for the misquote), and I don't find anything similar for your position. In fact, there seem to be very few coherent counter-narratives to the doomers', and, in that sense, Arvind is positioning himself to be one; whether correctly or not remains to be seen. So, while writing a point-by-point rebuttal of their view could be silly, a broad overview of your assumptions and some general predictions would be very interesting.
I think a key thing that separates me from the "AI is X (X a moving target)" crowd is that I don't like to commit to predictions. There are lots of possible futures, and it's up to us to create the one we want.
I can tell you what I don't want AI to be. I'm not interested in speculation about what the actual reality will be.
To be fair, you also asked me about the present! That I'm happy to engage with. Marketing has unfortunately made AI into too many things. There's ChatGPT, there's recommendation systems on the internet, there's weird robot stuff, there's stupid clinical decision rules (see, for example, https://www.argmin.net/p/healthcare-and-the-ai-dystopia), there's just boring numerical optimization (https://www.wired.com/story/ai-comes-up-with-bizarre-physics-experiments-but-they-work/). I can and do write commentary about all of these, but they are so varied that I can't take a singular stance.
Not sure if that is helpful, but I'm happy to follow up.
Thanks for the answer! I have a couple of comments.
1. I understand the tension between predicting and creating the future; I'd only add that understanding is a tool for achieving a desirable outcome.
2. Yeah, the term AI is vague and imprecise, and this problem goes way back to the field's genesis. For the sake of simplicity, I'm considering only broad ('general') AI, something that can perform human cognitive labor. Now, I know one may object here and say that even this definition is faulty and vague, and, while I agree, I'd say that it's crisp enough to maintain a conversation. So that would exclude chess playing and protein folding programs.
3. The doomer scenario has a few central assumptions: (i) that AI matching human cognitive capabilities is achievable (and probably in the short term), (ii) that using this generalist AI for AI research could result in a fast improvement in capabilities, and (iii) that autonomous agents far more intelligent than us -- an outcome of this process -- would threaten our existence. Now Arvind & Sayash's framework rejects those conclusions roughly by stating that (ii) would not come to pass, as there are numerous other bottlenecks that are not solvable by algorithmic intelligence. Those obstacles are also present in society at large in some sense. Things will take time, and we'll adapt. They also believe that 'aligning' AI agents is not a terribly hard problem and could be tackled by a combination of technical solutions and ordinary governance, hence 'normal' (although their terminology is rather broad, as they say 'electricity' and 'internet' are normal).
4. Now, I'm sure you understand those positions; I just laid them out because I've seen you dismiss doomers as silly, but I don't see a clear 'model' like the one forwarded by Arvind laying out the reasons for the disagreement. I was a bit surprised by this post because I thought you more or less agreed with Arvind. Do you believe human-matching generalist AI is too far away to inspire concern? Or is the framing the main problem? While I get that the AI X-risk community can be rather culty, I don't think that's enough reason to entirely dismiss their concerns.
5. Last, a little note on 'present vs. future concerns'. I think it's fair to focus on actual, existing problems. But in a fast-moving field, the future can arrive earlier than expected. I mean, problems like people having psychosis after talking to chatbots and widespread cheating challenging higher education would have been considered sci-fi worries just a few years ago.
Igor, thanks for this well-reasoned comment - it closely matches what was in my head reading the post.
Ben - have you written about why you think the Scott Alexander-style doomer view is off base? I'd like to understand your view on AI-related danger on a ~10-20 year horizon in more detail.
To Kevin and Igor,
Here is one of the few things I’ve written about AI Futures.
https://www.argmin.net/p/maybe-just-believing-in-agi-makes
I’m not sure I have much more to say than this, but it's important. Engaging with AGI debates is like engaging in religious debates. And there are downsides to such engagement. It provides useful cover for the AI companies to exploit people.
https://www.argmin.net/p/the-banal-evil-of-ai-safety
Engaging with long-termer storytelling distracts from real harm. In a similarly harmful endeavor, Arvind is one of the organizers of an absurd superforecasting project that I critique here:
https://www.argmin.net/p/one-out-of-five-ai-researchers
Hi, Ben. Thanks a lot for taking the time to answer! I'm familiar with the texts you linked. I still don't fully grasp your view, and that's why I wrote the first message. I also understand that you don't want to discuss in the terms I proposed, so I'll stop pushing in that direction. I'll just share a personal anecdote, because I also think it's important. I hope you at least read it and give it some thought.
A few years ago I was pretty unconcerned by the AI risk debate. I thought it was all pointless sci-fi and we were very far from the systems described in those stories. I also did my PhD in symbolic AI, and that community was very skeptical of the deep learning approach as a staircase to true reasoning, AGI, and so on. Transformer scaling changed the picture for me. I had a big mental list of 'things just scaling neural networks will not achieve', and this list grew thinner and thinner very fast. I think this was a surprise to almost everyone: how many expected language models to get gold at the IMO within a few years? I started to take the 'sci-fi' considerations more seriously. I was (and am) pretty concerned. I stumbled upon your blog after reading a text by Ruxandra Teslo in which she described being anxious pretty much like me, but was 'deprogrammed' partially by engaging with you and Yann LeCun. I went on to see what you had to say, honestly looking forward to being convinced, but ultimately didn't find a coherent counter-narrative. In the end, it made me take the doomer case more seriously: it seems that non-doomers simply do not have good reasons backing their position. Now, I understand that you think the position is 'just a story' and that debating quasi-religious people has some costs, e.g., legitimizing them. What I want to call attention to is that there are also costs in the opposite direction. If you don't think we should worry about possible AI ruin, it would be useful to know why on a technical level. If people who think like you don't occupy this space, more and more bystanders will flock to the doomer position, because those people are writing careful explanations of their reasoning, writing essays and books, and so on. They don't need the debate to legitimize them. They already are legitimate. And that position is not contained to some culty Berkeley people. It's n=1, but I can attest that I have almost nothing in common with the people you describe. I live on another continent, am not affiliated with EA, and never had a LessWrong account, read Harry Potter fanfic, or lived in a group house. I'm sure there are more people in the same boat.
AGI/robots turning against their masters is indeed a story as old as time. However, there are no credible reports of an actual golem, although some people swear one is stored in the attic of the Old New Synagogue in Prague. ;-) It seems, however, that we might be on the cusp of getting a generalist digital intelligence, both highly capable and autonomous, in the next ~5 years. One can entertain both concerns at the same time: being worried about the future of the technology _and_ the present harm, as you put it. In fact, I think a lot of people in the 'safety' community already do this. If those predictions come to pass, i.e., if we really get something remotely like the AGI described in the literature, we'll have to deal with those problems sooner rather than later. Ignoring them seems to me like ignoring future heat waves due to climate change by arguing that we should only care about present harms. It seems that you dismiss those concerns; your Twitter bio reads 'The world won't end', so it seems to me that beyond thinking that AGI stories are too hypothetical to be discussed, you also think that this framing of the technology is very unlikely to be realized. I hope you are correct and I am wrong. I really do. But I am not convinced, and by not laying out a detailed account of your reasons you lose a potentially sympathetic reader (me) and give more space to people you believe to be completely wrong and even harmful.
Best,
Igor.
I have the same question - I'm intrigued by this post, but I'm having trouble understanding what your message is or what exactly you are critiquing about their stance.
What's your defense of writing back-to-back books, only a couple of years apart, titled "AI Snake Oil" and "AI as Normal Technology"? How do you defend renaming the newsletter?
Seems like they updated their view, as thoughtful people tend to sometimes do. I don't have a problem with that. I assume they thought the snake oil message was most helpful at the time, and now they think the normal tech view is more helpful.
The best example I can think of is electricity, which gradually transformed from a mainstay of traveling theatrical performers and "medical electricians" into a "normal technology." Depending on what one means by "snake oil," a term I've always rather hated, I think this is an extremely common trajectory.
I also hate the term snake oil and already regret using it in this post. Sometimes when I strive for parallelism, I get stuck using imprecise jargon.
The thing that I'm stuck on for these examples: I want examples that start off as literal scams and then become mainstream treatments. Both MLM and crypto fall under this category. And patent medicines may also fit?
But a lot of scammers have an element of true believer in them (like supplement pushers). And a lot of things that people consider "snake oil" (sorry) like massage, chiropractic adjustment, or cupping, all "work" for professional athletes even though no one has an RCT validating their "effectiveness."
The other pattern you mentioned, where scams become "normal technologies" in a derogatory sense by being institutionalized, was a particular preoccupation of my advisor, who wrote a book on the polygraph: https://bookshop.org/a/111531/9780803224599
Oooh, great example. Forensics more broadly seems riddled with rather suspicious practices.
> And a lot of things that people consider "snake oil" (sorry) like massage, chiropractic adjustment, or cupping, all "work" for professional athletes even though no one has an RCT validating their "effectiveness."
Not to do some critical thinking 101 with your assertion above, but this is pretty poor epistemological practice. The lack of good evidence for those practices, including RCTs, doesn't mean we should take their use among professional athletes as good evidence that they work, or that they work for professional athletes. By that reasoning, everything professional athletes do to aid their performance "works".
Professional athletes do countless things that may or may not work and may in fact be detrimental in some contexts versus others. Just look at the literature around ice baths. Or look at all the things previous Tour de France riders were doing to improve their performance, like training fasted, which was very common 10 years or so ago and which we now know is worse than training unfasted. Maybe by "work" you mean these practices give them a psychological edge through belief? So something like the placebo effect?
What I love about science-based epistemology is how it convinces its adherents that the practices of the most successful people on earth are akin to superstitious witchcraft.
So your position is that everything LeBron James and Tom Brady do for recovery works?
Feyerabend might say Galileo's telescope is another case: it looked like a gimmick at first. Astronomers dismissed the early images as distortions, not evidence, and the whole enterprise had a whiff of snake oil. But within a generation it became the baseline "normal" technology of astronomy, precisely because scientists and institutions decided to trust and refine it.
I didn't realize you felt like there was much air between you and them, and I'm curious what the bullet points there might be. I've felt you were all part of a 'common sense resistance' to the whole climate. The rebranding is dumb, but both angles (and, I would have thought, yours) strike me as taking vital swings at the idea that LLMs et al. are everything their variously messianic and/or cynically investor-subsidy-addled boosters insist, a group that somehow manages to include most people worried they'll cause extinction events.
I took it as a given that 'normal' and 'snake oil' were routinely overlapping categories. We're in an age where the global pool of money is so big and so bored that essentially any marginal technology or business model can be subsidized by investors until it's so embedded that questions of utility or profitability in any old-fashioned sense are almost peripheral. So I'd answer your question with 'every app you've ever seen an ad for.' Uber comes to mind: it relied on 15 years of some of the largest burn rates in history to predatory-dump its way into control of the hired-car market, powered in part by nakedly dishonest promises that it'd have robot cars somewhere around the halfway mark, finally turned a middling profit, and now... it's here, I guess, getting me to work when my car is busted, at the hands of a driver who might still be turning a loss in some analyses.
I'd stick online sports betting in that same boat too. There's a case to be made that certain gambling prohibition regimes are too hypocritical and joyless to be worth the trouble, in a harm-reduction sort of way, but that got pried open into a circumstance where it's normal for all the 21-year-old sports hounds in a shared house to be ten seconds away from funneling their paychecks straight into their phones, in a way that I think more thoughtful people are united in thinking feels like a hustle.
I hear you that we're on the same side, but it's healthy that negative polarization doesn't force us to agree. I don't know if it's worth writing a whole post about this, but I feel like I almost always disagree with Arvind.
This is too simplistic, but here's a subset of a long list of positions he takes that I would be on the opposite side of:
- lack of anonymity in the Netflix Prize is a crisis
- bitcoin is deep and useful and good technology
- machine learning is causing replication crises
- machine learning papers should abide by research checklists
- all predictive optimization is bad
- we should do superforecasting about AI futures
- the AI snake oil frame
- the entire normal technology essay
Fair - and it looks like I'm on your side of all of that list (and I thought Arvind was too for the bitcoin bit; I could've sworn there was a line in the normal technology paper disowning it, but I must be tangling things up), with the exception of not liking the 'snake oil' frame, about which I'm genuinely curious, just because the whole atmosphere has felt to me like three-card monte from the jump: the dicey logic of 'this is so dangerous we need to get it to market first', the flurry of money changing hands with an extra layer of spicy accounting to make it look like even more money is changing hands, the blitz to build integrations no one asked for before people realize they don't want them, the gulf between gamed benchmarks and real-world utility. I've felt like P.T. Barnum was here the whole time. I don't want to steal your time here, but is there something about that shorthand of snake oil you feel is especially unhelpful? Is it that it implies the technology has no ability versus a potentially problematic amount, or...?
How about medicine and healthcare?
Complicated! I am sure healthcare is riddled with positive and negative examples. Yesterday, Jeff Lockhart told me "snake oil was a decent topical pain reliever brought to the US by Chinese rail workers, then other people in the US started selling ineffective knockoff products and it became synonymous with 'fraud.'"
The history of pharmaceuticals is probably exactly where I should look for positive examples. For a while it seemed like every chemical was a miracle cure, but then we had to build an FDA to sort the curatives from the poisons.
Yes! Amusingly, the original complaint about "snake oil" was not that snake oil was ineffective, but that what was being sold didn't contain actual snake byproducts! https://www.smithsonianmag.com/innovation/how-snake-oil-became-a-symbol-of-fraud-and-deception-180985300/
It's essential that cynical rebranding and rickety 'thought-leadership' are called out on Substack.
Banger. But also they’re likely doing the smart thing in modern marketing.
If you enjoy The Mirror Makers, you may want to check out Propaganda by Edward Bernays. It fits the theme and is potentially even more harrowing in this era.
The “normal technology” analogy I have used recently when discussing AI is cars: “normal,” ubiquitous, undeniably useful in many instances, and yet also responsible for mass death, anti-social behavior, horrible urban design, and climate ruin when compared with alternative technologies for moving lots of people around.
Ah yes, and soon we'll have the best of both worlds with AI cars.
I go back and forth on whether the phenomenon you point to is one of technology or just how society must work. People generally want to raise their social position relative to others (also companies, governments, departments, etc). If one player in the status game finds a way to raise their relative position, others will try to copy. This in turn makes a previous status symbol into table stakes once adoption becomes widespread. This phenomenon is often mediated by technology, but not always (look at how much more education people have now than 50, 100, or 200 years ago for instance). So is it a technology problem or a society problem?
This is not meant to be political, but vaccines were highly controversial / borderline quackery when they were first developed.
I mean, to prevent smallpox, you put cowpox on a dirty needle and scrape a child with it... seems bad man!
Some people claim snake oil could be useful, but only water snakes in China have the relevant chemicals.
https://www.pharmacytimes.com/view/fun-fact-what-was-snake-oil-used-to-treat-in-the-american-west-in-the-19th-century
Loved this!! The “normal technology” label isn’t an observation; it’s more of a laundering function. When institutions call something “normal,” they’re signaling that its externalities have been politically priced in, not that the harms are solved.
“Snake oil → normal” is rarely about the artifact; it’s about claims, incentives, and liability. The pattern I see is a legitimacy S-curve:
1. Invention + speculation: wild, totalizing promises; zero accountability.
2. Institutionalization (high capture risk): standards appear, but are written by beneficiaries; harms are reframed as “user error.”
3. Legitimacy (scarce): claims narrow, measurement hardens, liability attaches, independent audits bite. Only here should we call it “normal”—and even then, it’s provisional.
Advertising, MLMs, and now large swaths of the crypto industry have never reached stage 3. They stabilized in stage 2 via regulatory enablement and rent-seeking. That’s the “derogatory normality” you’re naming.
If you want a rare counterexample: electrotherapy. A century ago, it was rife with quack cure-alls. Today, tightly scoped modalities (TENS for analgesia, cardiac pacing/defibrillation, and targeted neurostimulation) are evidence-based, audited, and liability-bearing. Same energy, radically narrowed claims. It “normalized” only after precision, provenance, and accountability were enforced.
Generative AI is stuck between 1 and 2. Personified agents, homework completers, and clinical scribes are spreading faster than legitimacy mechanisms. Calling that “normal” imports the ad-tech political economy by default.
What to do instead of vibes-based normalizing:
-- Narrow the claim surface. No “general intelligence” posturing; require task-bounded, falsifiable performance specs with context of use.
-- Provenance by design. Training data lineage and model bill of materials as a precondition for procurement.
-- Assignable liability. If an AI system provides advice or drafts a clinical note, a named party assumes the downside risk by contract and law.
-- Independent audits with teeth. Third-party testing against declared claims, harm reporting registries, and recall authority for noncompliance.
-- Fiduciary duty for advice agents. If it talks like a counselor, it must act in the user’s best interest, no covert ad targeting or behavioral shaping.
-- Structural separation. Don’t let the engagement business own the model serving safety-critical domains. (We learned this the hard way with ads.)
The real question isn’t “Is AI snake oil or normal?” It’s “Who gets to define normal, on what metrics, and with what consequences when they’re wrong?” Without precision, provenance, and enforceable liability, “normal” means “profitable for incumbents.” With them, we have a shot at the rare, non-derogatory kind of normal.
I've read this post twice now and I'm still not sure I understand your differences with Kapoor and Narayanan. As I see it, there is no contradiction in moving from "snake oil" to "normal technology"; the former was about deflating overblown claims about the transformative powers of AI, and the latter is a framework through which we can think about AI adoption and building safeguards.
I also don't quite understand the different analogies to cryptocurrency and advertising. Crypto is obviously bad if the goal is to create speculative assets; but there might be other uses of the underlying technology. And advertising, well. Advertising is just advertising; it employs people, and at the margin, it makes people buy stuff. And if the biggest use of machine learning is in advertising, that's fine by me though other people's mileages may vary.
If I may indulge in a bit of armchair psychologizing, I wonder if this has to do with the fact that Kapoor and Narayanan are programmatizers. The way I see it, the framework of "normal technology" is a way to create a program that can unite a bunch of somewhat disparate tech criticisms while also keeping on board people who see some promise in AI. That seems to me to be a good thing. As I see it, you don't particularly care for programs, e.g., you don't like research checklists. Or maybe you just don't care for this one. As politics, though, given that many more people would like to keep working on AI and believe in putting it to various uses, I think the "normal technology" move is a good one that will allow conversation between a range of different actors.
As for AI being used for "automatic coursework completion," I am as distraught as you that LLMs have made the paper much harder to use as an assessment tool. But I think there is a solution sitting right in front of our faces that doesn't require any particularly overt stance on the "goodness" of AI (which gets us into all sorts of distracting questions like: Is Silicon Valley good? What about capitalism? etc. that don't actually do much to solve the actual problem happening right now). Like any socio-technical solution though, it does require collective action from us. https://computingandsociety.substack.com/p/how-do-you-solve-a-problem-like-chatgpt
I'm having trouble getting past your first paragraph. It's absolutely insane to, over the course of a year, decide that snake oil is normal technology. The correct move is to make snake oil illegal, not normalize it.
Now, perhaps I'm just upset with Arvind being a brandmaster. He's always going to choose what's catchy over what's accurate. But he can be called out for his incoherent branding and persistent opportunistic bouncing to be the smartest truth-sayer on the latest fads.
But you are also right that I think his reductive programmatization is harmful. For example, this paper is a great example of wall-to-wall misguided and terrible ideas about a crisis that doesn't exist.
https://arxiv.org/abs/2308.07832
I wrote a long list of my disagreements with Arvind in another comment: https://open.substack.com/pub/argmin/p/how-snake-oil-becomes-normal-technology?utm_campaign=comment-list-share-cta&utm_medium=web&comments=true&commentId=154457096
I guess this counts as a use case of crypto as normal technology? https://sites.lsa.umich.edu/mje/2025/01/08/peso-preservation-argentinas-embrace-of-stablecoins-for-economic-stability/
This is not quite what you're asking for, but a couple of years ago I caught Covid just after I arrived to spend a few months at the University of Edinburgh. Stuck in my Airbnb, I read about the evolution of the steam engine (which largely happened in that area) and was struck by the parallels with modern NLP and AI. There was the same mad combination of science, engineering, and crazy experimentation we see today, as well as blatant charlatans trying to make a quick quid.
Many of the people experimenting with steam engines were extremely interested in science and looked there for inspiration, but of course the science wasn't there. It would take decades for thermodynamics and statistical mechanics to be developed.