When billion-dollar AIs break down over puzzles a child can do, it’s time to rethink the hype
The tech world is reeling from a paper that shows the powers of a new generation of AI have been wildly oversold, says cognitive scientist Gary Marcus
In short, these models are very good at a kind of pattern recognition, but often fail when they encounter novelty that forces them beyond the limits of their training, despite being, as the paper notes, “explicitly designed for reasoning tasks”. The new paper also echoes and amplifies several arguments that Arizona State University computer scientist Subbarao Kambhampati has been making about the newly popular LRMs. One of the most striking findings in the new paper was that an LLM may perform well on an easy test set (such as Hanoi with four discs) and seduce you into thinking it has built a proper, generalisable solution when it has not.
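For context on why Hanoi is such a clean probe of generalisation: the puzzle has a simple recursive procedure that works identically for any number of discs, with the optimal solution always taking 2^n − 1 moves. A minimal sketch in Python (function and peg names are illustrative, not from the paper):

```python
def hanoi(n, source="A", target="C", spare="B"):
    """Return the move list for n discs using the classic recursive solution."""
    if n == 0:
        return []
    # Move n-1 discs out of the way, move the largest disc,
    # then stack the n-1 discs back on top of it.
    return (hanoi(n - 1, source, spare, target)
            + [(source, target)]
            + hanoi(n - 1, spare, target, source))

# Difficulty scales smoothly with n (always 2**n - 1 moves), so a solver
# with a genuine general procedure should handle any size, not just small ones.
print(len(hanoi(4)))   # the four-disc case takes 15 moves
```

A system that has internalised this procedure solves ten discs as readily as four; one that has merely matched patterns from small instances does not, which is the distinction the finding above turns on.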