Less is more: Meta study shows shorter reasoning improves AI accuracy by 34%
New research from Meta reveals that AI models achieve 34.5% better accuracy with shorter reasoning chains, challenging industry assumptions and potentially reducing computing costs by 40%.
“While demonstrating impressive results, [extensive reasoning] incurs significant computational costs and inference time,” the authors note, pointing to a substantial inefficiency in how these systems are currently deployed. “Our findings suggest rethinking current methods of test-time compute in reasoning LLMs, emphasizing that longer ‘thinking’ does not necessarily translate to improved performance and can, counter-intuitively, lead to degraded results,” the researchers conclude. The study also builds on recent work such as Princeton and Google DeepMind’s “Tree of Thoughts” framework and Carnegie Mellon’s “Self-Refine” methodology, which have explored different approaches to AI reasoning.
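To make the test-time-compute point concrete, here is a minimal, hypothetical Python sketch of what favoring shorter reasoning chains at inference time could look like: sample several chains in parallel, keep only the shortest few, and majority-vote their answers. The function names, the k/m parameters, and the shortest-chain selection rule are illustrative assumptions for this sketch, not a description of Meta’s published method, and the model call is stubbed out.

```python
import random
from collections import Counter


def generate_chain(prompt: str, seed: int) -> tuple[str, int]:
    """Stand-in for sampling one reasoning chain from an LLM.

    Returns (final_answer, chain_length_in_tokens). A real system would call
    a model API here; this stub just fabricates plausible values.
    """
    rng = random.Random(seed)
    length = rng.randint(50, 2000)        # pretend token count of the chain
    answer = rng.choice(["A", "B"])       # pretend extracted final answer
    return answer, length


def answer_from_shortest_chains(prompt: str, k: int = 8, m: int = 3) -> str:
    """Illustrative test-time strategy: sample k chains, keep the m shortest,
    and majority-vote their answers, rather than spending more compute on
    ever-longer reasoning.
    """
    chains = [generate_chain(prompt, seed=i) for i in range(k)]
    chains.sort(key=lambda pair: pair[1])            # shortest chains first
    top_answers = [answer for answer, _ in chains[:m]]
    return Counter(top_answers).most_common(1)[0][0]


if __name__ == "__main__":
    print(answer_from_shortest_chains("Is 2^10 greater than 1000?"))
```

In a setup like this, cutting off long chains and voting over the short ones caps both latency and token spend, which is the cost lever the researchers point to.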
Or read this on VentureBeat