AI's not 'reasoning' at all - how this team debunked the industry hype


Researchers just got very specific about what a language model's 'chain of thought' is actually doing.

In a paper published last month on the arXiv pre-print server and not yet peer-reviewed, the authors -- Chengshuai Zhao and colleagues at Arizona State University -- took apart the industry's reasoning claims with a simple experiment. The term "chain of thought" (CoT) is commonly used to describe the verbose stream of output you see when a large reasoning model, such as OpenAI's o1 or DeepSeek R1, works through a problem before giving its final answer. The term was introduced in 2022 by Jason Wei and colleagues at Google as a purely technical description of that prompting behavior. Since then, claims from OpenAI CEO Sam Altman and press releases from other AI promoters have increasingly cast chain of thought as human-like reasoning, using casual and sloppy rhetoric that doesn't respect Wei and team's strictly technical description.
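For readers who haven't seen one, the sketch below shows what chain-of-thought prompting looks like in practice. It assumes the OpenAI Python SDK with an API key in the environment; the model name is a placeholder, and the prompt wording is only illustrative. The point is simply that the "chain of thought" is the intermediate text the model emits before its final answer, the same output stream the Arizona State paper examines.

```python
# Minimal sketch of chain-of-thought prompting, assuming the OpenAI Python SDK
# (pip install openai) and an OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

question = "A train leaves at 3:15 pm and arrives at 5:05 pm. How long is the trip?"

# Asking the model to "think step by step" elicits the verbose intermediate
# text commonly called the chain of thought.
response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder; any chat-capable model accepts this prompt style
    messages=[
        {
            "role": "user",
            "content": question + " Let's think step by step, then state the final answer.",
        }
    ],
)

# The reply typically interleaves intermediate steps (e.g., "From 3:15 to 5:05
# is 1 hour 50 minutes...") with the final answer; that intermediate text is
# what gets described, loosely, as the model's "reasoning."
print(response.choices[0].message.content)
```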


