14 Comments
Raj Movva:

It seems like OpenAI basically agrees with you! https://www.businessinsider.com/microsoft-openai-put-price-tag-achieving-agi-2024-12

Ben Recht:

They should send me a check! ;)

Alex Tolley:

"However, these quirks do demonstrate that whatever these models are doing, it’s not what we do. That shouldn’t count as a demerit. The fact that these models undeniably produce coherent language and don’t understand semantics or syntax is one of their most amazing features. "

LLMs are to human language as submarines are to fish, and airplanes to birds.

"I use them in place of Google search, but that’s mostly because Google search is a crumbling husk of its former glory.³ These are all useful things. "

"What a massive fuckup by Google."

While I do not do that, apparently the Gemini AI summary at the top of searches has reduced clicks on search links. What does Google do now that its main revenue model is being undermined by its own AI summary? [Couldn't happen to a nicer crowd of ROT artists.]

Ben Recht:

"LLMs to human language is as submarines to fish, and airplanes to birds."

Certainly. Now what do we need to do to understand LLMs as well as we understand submarines and airplanes?

Maxim Raginsky:

I would actually quibble with this framing, but it could be rephrased as "LLMs are to human language as submarines are to swimming, and airplanes to flying." All of these then refer to amplifying or augmenting human capabilities in a particular domain of interest.

Ben Recht:

But last time I checked, people can't fly... ;)

Maxim Raginsky:

That's the augmentation part.

Ruxandra Teslo:

1) Coding in Linux and other tools that biologists still use for various tasks and which I cannot be bothered to learn; also things like helping me make nicer plots, and sometimes ideas for making my code faster (I am not a software engineer, so I mostly code "opportunistically").

2) I also use them to remind me of things I vaguely remember, like a scene in a book I read whose details I have forgotten.

3) Much better Google search.

There are a lot of other small ways that do not immediately come to mind...

Overall, I would say I would never delegate any serious mental task to them in its entirety, but I have found them greatly productivity-enhancing, especially for boring tasks or for getting started with something I know nothing about.

Ben Recht:

Yes, and what's interesting is that we talked about these a couple of years ago, and your use cases were similar then. They are definitely much better at these tasks today than two years ago! But their use cases have remained interestingly static.

Do you agree that the more these things are forced to be products, the less they seem like life-threatening superintelligences?

Ruxandra Teslo:

But I am not one of the people running around freaking out about superintelligence killing us all!

Ben Recht:

Hah, I know. Of course not! I was just remarking on how technology becomes more mundane through use.

Dylan Gorman:

Something I've never really seen addressed, but which seems important to me, is that the human brain has about the same power usage as a 60W light bulb, compared to god knows how much for ChatGPT. This seems to be a pretty clear argument that whatever LLMs are doing, it's much more brute force than what humans are doing.

Maxim Raginsky:

Curious about what you (and Leif) mean by poetics. Is it in the Bakhtinian sense? Or is it more along the lines of Graeber's poetic technologies? (As for me, I am happy to settle for pragmatics; poetics can be left for later.)
