Discussion about this post

rif a saurous:

Savage's shadow looms large. "Mathematically rational behavior" is a small-world notion. We live in a large world, and when LLMs respond to and generate natural language, they operate there too. In our large world, "rational behavior" still vaguely points at something, but the emphasis is on vaguely: it's often used to ask whether someone is acting "mathematically rationally" under some particular conjectured projection of the large world into a particular small world, and often the choice of projection is itself contentious.

So of course LLMs aren't "mathematically rational" in the large world, since that's not even a thing. The issue is (as usual) pretending a term that makes sense in the small world carries over uncritically to the large.

Mark A:

One thing I've read that I find so interesting is that it's the Chinese companies pursuing much more industrial applications of these systems, in robotics and manufacturing, while the hyper-capitalists have either become obsessed with this vague notion of AGI and superintelligence, building ever more complex LLMs, or merely pay lip service to such a goal. I can't quite articulate the thought, but in America you'd think actual industrial application would drive the innovation; instead we get ChatGPT 5, which makes the sort of mistake you highlight -- which seems to definitively demonstrate that LLMs are just stochastic parrots and don't understand the text they produce, any more than the image-producing systems understand the images they output.

8 more comments...
