OpenAI's fix for hallucinations is simpler than you think


A new research paper details why models make stuff up - and how to fix it across the industry.

The paper pins much of the blame on how models are graded: under the accuracy-only scoring used by most benchmarks, admitting ignorance is marked as an incorrect response, which pushes models toward generating what OpenAI describes as "overconfident, plausible falsehoods" -- hallucinations, in other words. "Strategically guessing when uncertain improves accuracy but increases errors and hallucinations," OpenAI wrote in an accompanying blog post about its findings. Running a model through millions of examples of the proper arrangement of subjects, verbs, and predicates will make it more fluent in its use of natural language, but as any living human being knows, reality is open to interpretation.
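To see why accuracy-only grading rewards guessing, here is a minimal back-of-the-envelope simulation. It is not from the paper, and the fraction of questions the model "knows" and the odds of a lucky guess are made-up assumptions; it only illustrates the incentive the article describes. A model that guesses whenever it is unsure earns a higher accuracy score than one that says "I don't know," but every unlucky guess is a confident wrong answer, i.e. a hallucination.

```python
import random

random.seed(0)

def simulate(guess_when_unsure: bool,
             p_known: float = 0.6,   # assumed share of questions the model actually knows
             p_lucky: float = 0.25,  # assumed chance a blind guess happens to be right
             n: int = 100_000):
    """Return (accuracy, wrong-answer rate) under binary accuracy grading."""
    correct = wrong = 0
    for _ in range(n):
        if random.random() < p_known:
            correct += 1                 # known answer, scored correct
        elif guess_when_unsure:
            if random.random() < p_lucky:
                correct += 1             # lucky guess, also scored correct
            else:
                wrong += 1               # confident wrong answer, i.e. a hallucination
        # else: the model says "I don't know", which accuracy-only grading
        # scores the same as a wrong answer (zero)
    return correct / n, wrong / n

for policy in (True, False):
    acc, halluc = simulate(policy)
    label = "always guess" if policy else "abstain when unsure"
    print(f"{label:>20}: accuracy={acc:.2f}, wrong-answer rate={halluc:.2f}")
```

Under these assumed numbers, always guessing scores roughly 0.70 accuracy with a 0.30 wrong-answer rate, while abstaining scores 0.60 with no wrong answers. A scoring rule that penalized confident errors more heavily than abstentions would flip that incentive, which is roughly the change in evaluation practice the article says OpenAI is arguing for.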


Related news:

Meta, OpenAI Face FTC Inquiry on Chatbot Impact on Kids

OpenAI, Oracle sign $300 billion computing deal, WSJ reports

OpenAI reportedly on the hook for $300B Oracle Cloud bill