Generative AI's crippling failure to induce robust models of the world
LLM failures to reason, as documented in Apple’s Illusion of Thinking paper, are really only part of a much deeper problem
Ernest Davis and I similarly stressed the central importance of world (cognitive) models in our 2019 book Rebooting AI, using an example of what happens in the human mind as one understands a simple children's story. For what I think are mostly sociological reasons, people who have built neural networks such as LLMs have mostly tried to do without explicit models, hoping that intelligence would "emerge" from massive statistical analyses of big data. To take another example, I recently ran some experiments on variations of tic-tac-toe with Grok 3 (which Elon Musk claimed a few months ago was the "smartest AI on earth"), with the only change being that we used y's and z's instead of X's and O's.
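The point of the relabeling can be made concrete with a small sketch (my own illustration, not the actual test harness used in the experiments): tic-tac-toe's logic does not depend on which marks the players use, so swapping X's and O's for y's and z's leaves the game structurally identical. A system with a robust model of the game should be indifferent to the renaming.

```python
# A symbol-agnostic tic-tac-toe win checker (illustrative sketch).
# The board is a flat list of 9 cells; each cell holds a mark or None.

WIN_LINES = [
    (0, 1, 2), (3, 4, 5), (6, 7, 8),   # rows
    (0, 3, 6), (1, 4, 7), (2, 5, 8),   # columns
    (0, 4, 8), (2, 4, 6),              # diagonals
]

def winner(board):
    """Return the winning mark, or None. Works for any pair of symbols."""
    for a, b, c in WIN_LINES:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

# The same position written with X/O and then with y/z: only the
# labels differ, so the "same" player wins either way.
xo = ["X", "O", "X",
      "O", "X", None,
      None, None, "X"]
yz = [{"X": "y", "O": "z"}.get(cell) for cell in xo]

print(winner(xo))  # X wins on the main diagonal
print(winner(yz))  # y wins in the identical position under renamed symbols
```

The renaming is a pure relabeling, invisible to the game's abstract structure; the experiments probe whether the model has internalized that structure or merely memorized patterns over the familiar X/O vocabulary.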