LLMs Aren't World Models


I believe that language models aren’t world models. It’s a weak claim — I’m not saying they’re useless, or that we’re done milking them.

I only mean that, having read a trillion chess games, LLMs specifically have not learned that to make legal moves, you need to know where the pieces are on the board (the toy sketch below spells out what that knowledge amounts to).

Then LLMs were flogged into becoming “good at math,” and now they might say something about “Peano axioms,” along with some total garbage about set theory. They emit enough S(S(2)) and such that it probably counts as a proof, though I have yet to see the simple “2+2 = 2+(1+1) = (2+1)+1 = 3+1 = 4” which I’d expect from an entity that understands the question; the full Peano derivation is written out after the chess sketch below.

I have conflicting theories about why some people do great things with “agentic AI” while I find it hopelessly useless for me; I am waiting for someone to write something crisp and well-researched about this to teach me the truth, or a useful approximation of it.
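To make the chess point concrete, here is a minimal sketch (my illustration, not anything from the post, using a toy coordinate move format rather than real chess notation): legality is a property of the current board, and the current board exists only if you replay the whole history of moves.

```python
def replay(moves):
    """Reconstruct piece positions by replaying (src, dst) coordinate moves."""
    board = {(1, 0): "N"}  # toy board: a lone knight on b1
    for src, dst in moves:
        board[dst] = board.pop(src)  # moving requires knowing where pieces are

    return board

def is_legal_knight_move(board, src, dst):
    """A knight move is legal only relative to the reconstructed state."""
    if board.get(src) != "N":
        return False  # no knight on src: the move text alone can't tell you this
    dx, dy = dst[0] - src[0], dst[1] - src[1]
    return (abs(dx), abs(dy)) in {(1, 2), (2, 1)}

history = [((1, 0), (2, 2))]  # Nb1-c3
board = replay(history)
print(is_legal_knight_move(board, (2, 2), (3, 4)))  # True: the knight is on c3 now
print(is_legal_knight_move(board, (1, 0), (2, 2)))  # False: b1 is empty after the replay
```

The same move string is legal or illegal depending on state that never appears in the text; that is exactly the world-model ingredient at issue.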
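And to spell out the arithmetic the post alludes to (again my sketch, not the post’s): with numerals defined as iterated successors, so 2 = S(S(0)), and the Peano addition axioms a + 0 = a and a + S(b) = S(a + b), the entire proof of 2+2 = 4 is five rewriting steps:

```latex
\begin{align*}
2 + 2 &= 2 + S(S(0)) && \text{definition: } 2 = S(S(0)) \\
      &= S(2 + S(0)) && a + S(b) = S(a + b) \\
      &= S(S(2 + 0)) && a + S(b) = S(a + b) \\
      &= S(S(2))     && a + 0 = a \\
      &= 4           && \text{since } S(S(2)) = S(S(S(S(0)))) = 4
\end{align*}
```

This is the “2+2 = 2+(1+1) = (2+1)+1 = 3+1 = 4” chain in formal dress, which is why S(S(2)) showing up without the chain reads as pattern-matching rather than understanding.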
