Study finds ChatGPT-5 is wrong about 1 in 4 times — here’s why

AI chatbots don’t just make things up; they’ve been trained and rewarded to do it

Research points to a structural cause of hallucinations: the benchmarks and leaderboards used to rank AI models reward confident answers. Under accuracy-only scoring, a wrong guess costs no more than an honest “I don’t know,” so models learn to guess rather than abstain.

That could mean your future chatbot hedges more often: less “here’s the answer” and more “here’s what I think, but I’m not certain.” It may feel slower, but it could dramatically reduce harmful errors. And for developers, it’s a sign that it’s time to rethink how we measure success, so that future AI assistants can admit what they don’t know instead of getting things completely wrong.
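The incentive problem the study describes can be sketched with a few lines of arithmetic. The code below is a minimal illustration (not the study’s own methodology): it compares the expected benchmark score of guessing versus abstaining, first under accuracy-only scoring and then under a hypothetical rule that penalizes wrong answers.

```python
def expected_score(p_correct: float, guess: bool,
                   right: float = 1.0, wrong: float = 0.0,
                   abstain: float = 0.0) -> float:
    """Expected score on one question, given the model's chance of being right."""
    if not guess:
        return abstain
    return p_correct * right + (1 - p_correct) * wrong

p = 0.25  # a low-confidence model: right only 1 in 4 times when it guesses

# Accuracy-only scoring: wrong answers cost nothing, so guessing always wins.
print(expected_score(p, guess=True))             # 0.25
print(expected_score(p, guess=False))            # 0.0

# Penalized scoring (wrong = -1): abstaining now beats a low-confidence guess.
print(expected_score(p, guess=True, wrong=-1.0)) # -0.5
print(expected_score(p, guess=False))            # 0.0
```

As long as a wrong answer scores the same as no answer, the rational leaderboard strategy is to always answer confidently, which is exactly the behavior the researchers argue trains models to hallucinate.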
