Study finds ChatGPT-5 is wrong about 1 in 4 times — here's the reason why
AI chatbots don’t just make things up; they’ve been trained and rewarded to do it
Research points to a structural cause of hallucinations: the benchmarks and leaderboards used to rank AI models reward confident answers over honest uncertainty. That could mean your future chatbot hedges more often, offering less “here’s the answer” and more “here’s what I think, but I’m not certain.” It may feel slower, but it could dramatically reduce harmful errors. For developers, it’s a sign that it’s time to rethink how success is measured, so that future AI assistants can admit what they don’t know instead of getting things completely wrong.
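To see why accuracy-only scoring nudges models toward confident guessing, here is a minimal, hypothetical sketch (not taken from the study): under plain accuracy, an uncertain guess still earns points in expectation while “I don’t know” earns none, but adding a penalty for wrong answers flips that incentive. The scoring function and numbers below are illustrative assumptions, not the benchmark math used by any real leaderboard.

```python
# Illustrative sketch (not from the article): why accuracy-only leaderboards
# reward guessing. Under plain accuracy, "I don't know" scores 0, while a
# guess that's right 25% of the time scores 0.25 on average, so a model
# tuned to climb the leaderboard learns to guess confidently.
# A scoring rule that penalizes wrong answers reverses that incentive.

def expected_score(p_correct: float, answers: bool, wrong_penalty: float = 0.0) -> float:
    """Expected score on one question.

    p_correct:     chance the model's guess is right.
    answers:       True if the model guesses, False if it abstains.
    wrong_penalty: points subtracted for a wrong answer (0 = plain accuracy).
    """
    if not answers:
        return 0.0  # abstaining earns nothing under either rule
    return p_correct * 1.0 - (1.0 - p_correct) * wrong_penalty

p = 0.25  # the model is unsure: only a 25% chance its guess is right

# Plain accuracy: guessing beats abstaining, so confident errors are rewarded.
print(expected_score(p, answers=True))                      # 0.25
print(expected_score(p, answers=False))                     # 0.0

# With a penalty for confident wrong answers, abstaining wins.
print(expected_score(p, answers=True, wrong_penalty=1.0))   # -0.5
print(expected_score(p, answers=False, wrong_penalty=1.0))  # 0.0
```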