AI's not 'reasoning' at all - how this team debunked the industry hype
Researchers just got very specific about what a language model's 'chain of thought' is actually doing.
In a paper published last month on the arXiv pre-print server and not yet peer-reviewed, the authors -- Chengshuai Zhao and colleagues at Arizona State University -- took apart the reasoning claims with a simple experiment.

The term "chain of thought" (CoT) is commonly used to describe the verbose stream of output you see when a large reasoning model, such as GPT-o1 or DeepSeek R1, shows how it works through a problem before giving its final answer. The term was coined in a purely technical sense by Jason Wei and colleagues, then at Google, in 2022. Since then, claims from OpenAI CEO Sam Altman and various press releases from AI promoters have increasingly emphasized the human-like nature of this "reasoning," using casual and sloppy rhetoric that doesn't respect Wei and team's purely technical description.