Researchers push back on Apple study: LRMs can handle complex tasks with the right tools
A new commentary from Pfizer researchers challenges the main claims of "The Illusion of Thinking," a study co-authored by Apple scientists that found large reasoning models (LRMs) struggle as tasks get more complex.
The Pfizer team describes this failure mode as "learned helplessness": when an LRM cannot execute a long sequence of steps perfectly, it may incorrectly conclude that the task is unsolvable. In the commentary's own experiments, models given tool access solved harder puzzles, and o4-mini even showed metacognitive self-correction, an advanced problem-solving trait.
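To make the tool-access point concrete, here is a minimal sketch (not code from either paper) of the kind of external tool an LRM could call. Tower of Hanoi was among the puzzles in "The Illusion of Thinking"; its optimal solution takes 2^n − 1 moves, so enumerating every move token by token becomes fragile long before a short recursive solver does.

```python
def hanoi_moves(n, src="A", aux="B", dst="C"):
    """Return the optimal move list for n disks (2**n - 1 moves)."""
    if n == 0:
        return []
    return (
        hanoi_moves(n - 1, src, dst, aux)   # park n-1 disks on the spare peg
        + [(src, dst)]                      # move the largest disk
        + hanoi_moves(n - 1, aux, src, dst) # restack n-1 disks on top of it
    )

moves = hanoi_moves(10)
print(len(moves))  # 1023 moves: trivial for a tool to generate,
                   # but a long sequence to reproduce move by move in-context
```

The contrast illustrates the commentary's argument: a model that can delegate the mechanical move generation to a tool no longer needs to execute the whole sequence flawlessly itself.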