Will AI think like humans? We're not even close - and we're asking the wrong question
The holy grail of AI has long been to think and reason as humanly as possible. Large reasoning models (LRMs), while not perfect, offer a tentative step in that direction.
But large language models (LLMs) and their slightly more advanced LRM offspring operate on predictive analytics driven by data patterns, not on complex, human-like reasoning. Typically, "LRMs excel at tasks that are easily verifiable but difficult for humans to generate -- areas like coding, complex QA, formal planning, and step-based problem solving," said Huang. The goal of LRMs, and ultimately AGI, is to "build toward AI that's transparent about its limitations, reliable within defined capabilities, and designed to complement human intelligence rather than replace it," Xiong said.