AI hallucinations are getting worse – and they're here to stay
An AI leaderboard suggests the newest reasoning models used in chatbots are producing less accurate results because of higher hallucination rates. Experts say the problem is bigger than that
AI chatbots from tech companies such as OpenAI and Google have been receiving so-called reasoning upgrades in recent months, intended to make them better at giving us answers we can trust. But recent testing suggests they are sometimes doing worse than previous models.

The stakes are practical. A model that consistently states falsehoods and requires fact-checking won’t be a helpful research assistant; a paralegal-bot that cites imaginary cases will get lawyers into trouble; a customer service agent that claims outdated policies are still active will create headaches for the company.

The Vectara team, which maintains the hallucination leaderboard, pointed out that although the DeepSeek-R1 model hallucinated 14.3 per cent of the time, most of these hallucinations were “benign”: answers that are supported by logical reasoning or world knowledge, but not actually present in the original text the bot was asked to summarise.