Seven replies to the viral Apple reasoning paper and why they fall short


Also: another paper that seals the deal

(i) The LRMs failed on Tower of Hanoi with 8 discs, where the optimal solution is 255 moves, well within so-called token limits; (ii) well-written symbolic AI systems generally don’t suffer from this problem, and AGI should not either. The real news here, aside from the fact that this was a clever study nailing down an important point, is that people are finally starting to pay attention to (one of) the two biggest Achilles’ heels of generative AI, and to appreciate its significance. Gary Marcus, professor emeritus at NYU and author of The Algebraic Mind and “Deep learning is hitting a wall”, both of which anticipated these results, is thrilled to see people finally realize that scaling is not enough to get us to AGI.
