Seven replies to the viral Apple reasoning paper and why they fall short
Also: another paper that seals the deal
(i) The LRMs failed on Tower of Hanoi with 8 discs, where the optimal solution is 255 moves, well within so-called token limits; (ii) well-written symbolic AI systems generally don’t suffer from this problem, and AGI should not either.

The real news here, aside from the fact that this was a clever study nailing down an important point, is that people are finally starting to pay attention to one of the two biggest Achilles’ heels of generative AI, and to appreciate its significance. Gary Marcus, professor emeritus at NYU and author of The Algebraic Mind and “Deep learning is hitting a wall”, both of which anticipated these results, is thrilled to see people finally realize that scaling is not enough to get us to AGI.
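For context on the 255-move figure, here is a minimal sketch (function and peg names are illustrative, not from the paper) of the classic recursive Tower of Hanoi solver, which produces the optimal 2^n − 1 moves; for n = 8 that is exactly 255:

```python
def hanoi(n, source, target, spare, moves):
    """Classic recursive Tower of Hanoi: move n discs from source to target."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)  # move n-1 discs out of the way
    moves.append((source, target))              # move the largest disc
    hanoi(n - 1, spare, target, source, moves)  # stack n-1 discs back on top

moves = []
hanoi(8, "A", "C", "B", moves)
print(len(moves))  # 255 == 2**8 - 1, the optimal move count
```

The move count satisfies the recurrence T(n) = 2·T(n−1) + 1 with T(0) = 0, giving T(n) = 2^n − 1, which is why 8 discs stays comfortably inside typical token limits.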