5 Comments
Onid

That last paragraph touches on something that’s always bothered me with all the talk about superforecasters and prediction markets. Rationalists talk about these things as if making them mainstream would increase our ability to predict the future with no adverse effects.

But to me it has always seemed likely that the more mainstream these practices become, the more the predictions themselves will influence events, and the more incentive there will be to manipulate them.

brutalist

Predicting the future changes the future! I think on an abstract level, the authors of the AI 2027 novella would certainly agree; after all, the whole point of the exercise is to “reduce P(doom)” or however they’d describe it. But I don’t think they’ve really internalized what that means for their theory of forecasting.

I just really don’t understand why people think I should infer anything about their ability to predict unprecedented and vaguely defined events like “AGI by 2035” from their track record predicting more routine events like “winner of a presidential election with 75% confidence.”

Kalen

I've been thinking a bit lately about how much of the cultural currency of the tech industry hinges on a carefully curated perception of inevitability. Technology is just stuff made by people; there are always going to be multiple worlds accessible from here, depending on what people decide to work on, on the one hand, and what they decide to use or prohibit, on the other. But canting the discussion towards 'what technology wants' or 'what's going to happen', as if it could be meaningfully and dispassionately predicted like a simple physical system, really just leaves investing and buying first as the only moves on the board. 'What should people work on' is an empowering question; 'what do you guess you'll be able to buy' is not.

Emin Orhan

This is a familiar tactic in rationalist longtermist AI doom-mongering circles. They will ask 6 people meaningless questions like: "how much faster would your company be going at its core research if every researcher in your company had 30 digital twins that could think 30 times faster?" Some will say ~1x, others will say ~100x, and then, instead of throwing the whole thing away as junk, they will say things like "the median researcher at the frontier labs thinks they can accelerate their research ~10x with superhuman coding agents," and then they will pile meaningless number upon meaningless number like this to predict (voila!) an "intelligence explosion" in 2027 or some such garbage.

Jeremy Kun

It would be good form to disclose that you have a direct professional relationship with the author of the podcast you referred us to 😉
