Why Claude's Comment Paper Is a Poor Rebuttal


Recently, Apple published a paper on LRMs (Large Reasoning Models), finding “that LRMs have limitations in exact computation” and that “they fail to use explicit algorithms and reason inconsistently across puzzles.” I would consider this a death blow to the current push for using LLMs and LRMs as the basis for AGI. Subbarao Kambhampati and Yann LeCun seem to agree. You could say that the paper knocked out LLMs. More recently, a comment paper showed up on arXiv and was shared around X as a rebuttal to Apple’s paper. Putting aside the stunt of listing Claude Opus as a co-author (yes, I’m not kidding), the paper is itself a poor rebuttal for many reasons, which we shall explore, but mainly because it misses the entire point of the Apple paper and of prior research by AI researchers such as Professor Kambhampati.

The rebuttal also fails to engage with one of the Apple paper’s key findings: “upon approaching a critical threshold—which closely corresponds to their accuracy collapse point—models counterintuitively begin to reduce their reasoning effort despite increasing problem difficulty” (Shojaee et al., p. 8). It completely ignores this finding and offers no explanation for why models would systematically reduce computational effort when faced with harder problems. Further, the rebuttal does not address the complexity-regime patterns that the Apple paper identified consistently across models, or explain how token limits would account for them.
