New paper pushes back on Apple’s LLM ‘reasoning collapse’ study
Apple’s recent AI research paper, “The Illusion of Thinking”, has been making waves, but not everyone agrees with its conclusion.
Apple’s recent AI research paper, “The Illusion of Thinking”, has been making waves for its blunt conclusion: even the most advanced Large Reasoning Models (LRMs) collapse on complex tasks. Today, Alex Lawsen, a researcher at Open Philanthropy, published a detailed rebuttal arguing that many of Apple’s most headline-grabbing findings boil down to experimental design flaws, not fundamental reasoning limits. Chief among them: the evaluation scripts didn’t distinguish between reasoning failure and output truncation. Apple’s automated pipelines judged models solely by whether they produced a complete, enumerated move list, even in cases where writing out the full solution would exceed the model’s token limit.
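To see why that matters, here is a minimal sketch (not Apple’s or Lawsen’s actual pipeline, and with assumed token budgets and a Tower of Hanoi-style task) of how a scorer that only accepts a fully enumerated move list conflates truncation with a genuine reasoning failure:

```python
# Hypothetical limits for illustration only.
MAX_OUTPUT_TOKENS = 8_192   # assumed model output budget
TOKENS_PER_MOVE = 8         # rough assumption: tokens per enumerated move


def required_moves(n_disks: int) -> int:
    """Minimum moves for an n-disk Tower of Hanoi: 2^n - 1."""
    return 2 ** n_disks - 1


def naive_score(output_moves: int, n_disks: int) -> str:
    """Judge solely by whether every move was enumerated."""
    return "pass" if output_moves >= required_moves(n_disks) else "reasoning failure"


def truncation_aware_score(output_moves: int, n_disks: int) -> str:
    """First check whether the full answer could even fit in the output budget."""
    needed_tokens = required_moves(n_disks) * TOKENS_PER_MOVE
    if needed_tokens > MAX_OUTPUT_TOKENS:
        return "output truncated (task exceeds token budget)"
    return naive_score(output_moves, n_disks)


if __name__ == "__main__":
    # 15 disks need 32,767 moves -- far more tokens than the assumed budget allows,
    # so the naive scorer reports a reasoning failure while the truncation-aware
    # scorer flags the run as physically impossible to complete.
    print(naive_score(output_moves=1_000, n_disks=15))
    print(truncation_aware_score(output_moves=1_000, n_disks=15))
```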