New paper pushes back on Apple’s LLM ‘reasoning collapse’ study

Apple’s recent AI research paper, “The Illusion of Thinking”, has been making waves, but not everyone agrees with its conclusion.

The paper’s blunt conclusion is that even the most advanced Large Reasoning Models (LRMs) collapse on complex tasks. Today, Alex Lawsen, a researcher at Open Philanthropy, published a detailed rebuttal arguing that many of Apple’s most headline-grabbing findings boil down to experimental design flaws, not fundamental reasoning limits. Chief among them: the evaluation scripts didn’t distinguish between reasoning failure and output truncation. Apple’s automated pipelines judged models solely on complete, enumerated move lists, even in cases where writing out the full solution would exceed the model’s token limit.
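
Lawsen’s point is easiest to see with some quick arithmetic on the Tower of Hanoi puzzles Apple tested: an optimal solution for n disks takes 2^n − 1 moves, so a fully enumerated move list grows exponentially with puzzle size. The Python sketch below illustrates the scaling; the tokens-per-move and output-budget figures are assumed ballpark values, not numbers from either paper.

    # Rough illustration: optimal Tower of Hanoi solutions take 2**n - 1 moves,
    # so a fully enumerated move list quickly exceeds any fixed output budget.
    # TOKENS_PER_MOVE and TOKEN_LIMIT are assumed ballpark figures, not values
    # from Apple's paper or Lawsen's rebuttal.

    TOKENS_PER_MOVE = 10    # e.g. "move disk 3 from peg A to peg C" plus separators
    TOKEN_LIMIT = 64_000    # hypothetical model output budget

    for n_disks in range(5, 21):
        n_moves = 2 ** n_disks - 1          # optimal move count for n disks
        est_tokens = n_moves * TOKENS_PER_MOVE
        status = "fits" if est_tokens <= TOKEN_LIMIT else "exceeds budget"
        print(f"{n_disks:2d} disks: {n_moves:>9,} moves ≈ {est_tokens:>10,} tokens ({status})")

Under these assumptions the full move list stops fitting somewhere around 13 disks, so a grader that only accepts a complete enumeration would mark larger puzzles wrong regardless of whether the model reasoned correctly, which is exactly the failure mode Lawsen flags.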

Related news:

Potential Alzheimer's treatment is a scalpel, not a sledgehammer | Researchers have conducted a proof-of-concept study showing how similar compounds can precisely target protein tangles and plaques in the brain.

Cutting-edge AI models from OpenAI and DeepSeek undergo 'complete collapse' when problems get too difficult, study reveals

The Dogs of Chernobyl Are Experiencing Rapid Evolution, Study Suggests