Do reasoning AI models really ‘think’ or not? Apple research sparks lively debate, response
Ultimately, the big takeaway for ML researchers is this: before proclaiming an AI milestone, or an obituary, make sure the test itself isn't flawed.
Alexander Doria, aka Pierre-Carl Langlais, an LLM trainer at the energy-efficient French AI startup Pleias, said the framing misses the nuance, arguing that models may be learning partial heuristics rather than simply matching patterns. Ethan Mollick, the AI-focused professor at the University of Pennsylvania's Wharton School of Business, called the idea that LLMs are "hitting a wall" premature, likening it to similar claims about "model collapse" that didn't pan out. Meanwhile, critics like @arithmoquine were more cynical, suggesting that Apple, behind the curve on LLMs compared to rivals like OpenAI and Google, might be trying to lower expectations, "coming up with research on how it's all fake and gay and doesn't matter anyway," they quipped, pointing to Apple's reputation for now poorly performing AI products like Siri.