In New York Magazine this week, one of my favorite tech columnists, John Herrman, wrote a long piece describing the narrative behind “Artificial General Intelligence (AGI).” I really like the way he summarized his piece on Bluesky:
“AGI is a story, not a technology. I’m not arguing against capabilities here, or even the markets. The stories we tell about new technologies really do determine how we end up using them, and how they get used on us.”
Coincidentally, one of my favorite substackers, Jasmine Sun, also dipped her toe into the AGI discourse, writing her first attempt to make sense of the narrative coming from academia and industry alike. Her distillation of “the story” ended up being very similar to Herrman’s.
Reporters end up retelling the same metastory about AGI every time the term bubbles its way into the news. Every article roughly goes like this:
Even before computers were invented, people feared the potential of machines to supplant labor or defy their creators. We can point to several times in the history of computing where computer scientists were convinced that their computers would achieve sentient capabilities in only a matter of years. It never happened before, but computers are now so powerful and their artifacts now so impressive that perhaps it will finally happen this decade. Let’s take that possibility seriously.
Critical articles (i.e., those not written by Kevin Roose) often discuss how this story abounds in science fiction. They might point to AGI fears in popular culture centuries before the first computer—the story of the golem, the story of Frankenstein, the story of Rossum’s Universal Robots, the story of the Garden of Eden. Despite this trope being present throughout literary and technological history, tech companies want us to believe this time it’s different.
Now, I agree with Herrman that these stories about technology shape our relationship to the products and artifacts. But we can tell many stories about AGI, and we shouldn’t focus on the story that the AI companies want us to keep telling, where robots overthrow their masters. My preferred story casts these products and companies in a different light: "What happens when a religious sect takes power inside an unprecedentedly powerful oligopoly at the nadir of a global empire?"
You don’t have to dig too deep into science fiction to find this trope. This is the story of the Bene Gesserit and CHOAM in Dune. Disturbingly closer to our current world, it’s the story of the Cult of Asherah and L. Bob Rife’s telecommunications conglomerate in Snow Crash. It’s the Eagan Religion and Lumon in Severance. The story is also masterfully told in an epic two-episode arc of South Park.1
The religious sect in question here is AGI-theism, whose adherents believe they can create superintelligence. Their belief system has been honed by years of drug-addled betterment sessions and house parties. They have been propped up by Silicon Valley patronage, and have centers scattered across the Bay Area.2
I’ve been around long enough to have seen the repeated iterations of this sect’s proclamations of the Singularity.
“Our system beat Atari” - AGI
“Our system beat Lee Sedol at Go” - AGI
“Our system beat some people at DOTA” - AGI
“Our system solved a Rubik's Cube” - AGI
“Our system is the best chatbot ever” - AGI
The last one is different. But it also points to how privileged these guys3 were. They had the backing of an unprecedentedly resilient oligopoly that has had a chokehold on American capital for two decades. With access to infinite resources, these adherents ran reinforcement learning on their companies. They threw billions of dollars' worth of spaghetti at the wall until something stuck. Analogous to the prosperity gospel, engineering works were evidence of salvation.
In 2019, I made a reading list to introduce non-AI experts to AI. All of the articles were about AGI-theism. There’s Ted Chiang's critique of AGI-theism as a way to understand Silicon Valley’s unease with its god complex. Maciej Cegłowski's Superintelligence: The Idea That Eats Smart People highlights similar religious themes. In the Harvard Data Science Review, Stephanie Dick details the history of our conceptions of artificial vs natural intelligence. And David Leslie's scathing review of Stuart Russell's book discusses how ideological marketing and fear-mongering are cast as scientific debate.
Despite the half-decade of impressive chatbot results, these articles hold up remarkably well. They explain the mindset of the AGI technopoly. The AGI god shares the values of the Silicon Valley founder, funder, and engineer. As Chiang and Cegłowski both point out, those values are of resource aggregation, constant optimization, economic productivity, and blind maximization of “reward.” This is Silicon Valley, and they think they are building a more powerful version of themselves.
Telling the story as one of AGI-theism, rather than one of a techlash, gives a much different perspective of how we engage with our new technologies. Just because they had a big success, it doesn’t mean their religion is correct, nor that we have to accept its tenets. Instead, it’s critical to receive the technology, when it works and when it doesn’t, as being produced by people who want you to believe.
The AGI-theist narrative perhaps would have reshaped Sun’s essay. She begins her deep dive into AGI by referencing the meme “What did Ilya see?” This rhetorical question assumes Ilya Sutskever, one of the founders of OpenAI, knew that there was some sort of superpowerful, superdangerous technology that Sam Altman wanted to release without the Board’s permission. Whatever Ilya saw led to the firing of Sam Altman in November 2023. Sun is rightfully dismissive of this “rhetoric,” but then credulously explores the belief system in an attempt at a good-faith assessment.
However, an alternative path would have instead explored what Ilya did see. A new book by Keach Hagey and reporting by Nitasha Tiku reveal that it wasn’t superintelligence. They describe political skirmishes and psychodrama amongst a set of very nerdy priests of a corrupt AGI-theist conclave. What Ilya saw ended up being pretty mundane: That Sam Altman was a pathologically lying asshole.
1. It’s also the story of Catholicism and the Roman Empire.
2. There’s one a block away from my house!
3. Like many sectarian hierarchies, they’re almost all men.
The “stories” lens is great. The best (and, to me, truest) alternative story right now, I think, is Simon Willison's electric bicycle for the mind:
"I've been thinking about generative AI tools as 'bicycles for the mind' (to borrow an old Steve Jobs line), but I think 'electric bicycles for the mind' might be more appropriate. They can accelerate your natural abilities, you have to learn how to use them, they can give you a significant boost that some people might feel is a bit of a cheat, and they're also quite dangerous if you're not careful with them!"