Wait a minute! Researchers say AI's "chains of thought" are not signs of human-like reasoning

A research team from Arizona State University warns against interpreting intermediate steps in language models as human thought processes. The authors see this as a dangerous misconception with far-reaching consequences for research and application.

A research group at Arizona State University cautions that the intermediate steps produced by language models are simply statistically generated text, not signs of human-like thinking or understanding. For example, some scientists have tried to make these "chains of thought" more interpretable or have used features such as their length and clarity as measures of problem complexity, despite a lack of evidence supporting these connections. The authors argue that treating these "chains of thought" as human reasoning can lead to misunderstandings, poor research practices, and misplaced trust in our ability to interpret or control AI systems.

Related news:

Researchers Warn Against Treating AI Outputs as Human-Like Reasoning

'Some Signs of AI Model Collapse Begin To Reveal Themselves'

A thought on JavaScript "proof of work" anti-scraper systems