Wait a minute! Researchers say AI's "chains of thought" are not signs of human-like reasoning
A research team from Arizona State University warns against interpreting the intermediate steps in language model outputs as human thought processes. The authors see this as a dangerous misconception with far-reaching consequences for both research and practical applications.
A research group at Arizona State University cautions that the intermediate steps produced by language models are simply statistically generated text, not signs of human-like thinking or understanding. The authors argue that treating these "chains of thought" as human reasoning leads to misunderstandings, poor research practices, and misplaced trust in our ability to interpret or control AI systems. For example, some scientists have tried to make these intermediate steps more interpretable, or have used features such as their length and clarity as measures of problem complexity, despite a lack of evidence supporting these connections.