OpenAI's fix for hallucinations is simpler than you think
A new research paper details why models make stuff up - and how to fix it across the industry.
Under that evaluation method, admitting ignorance is scored as an inaccurate response, which pushes models toward generating what OpenAI describes as "overconfident, plausible falsehoods": hallucinations, in other words. "Strategically guessing when uncertain improves accuracy but increases errors and hallucinations," OpenAI wrote in an accompanying blog post about its findings. Running a model through millions of examples of how subjects, verbs, and predicates fit together will make it more fluent in natural language, but, as any living human being knows, reality is open to interpretation.
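To make that incentive concrete, here is a rough sketch, not OpenAI's code, with made-up numbers: under a grader that counts only exact correctness, a model that guesses on questions it cannot answer outscores one that admits uncertainty, even though the guesser produces far more false statements.

```python
# Illustrative sketch of accuracy-only grading (hypothetical numbers,
# not OpenAI's evaluation code).

def accuracy_only_score(answers):
    """Score 1 for a correct answer, 0 for anything else,
    including an honest "I don't know"."""
    return sum(1 for a in answers if a == "correct") / len(answers)

# Suppose a model faces 10 questions it is genuinely unsure about,
# and a blind guess happens to be right 2 times out of 10.
guesser = ["correct"] * 2 + ["wrong"] * 8      # guesses on every question
abstainer = ["i_dont_know"] * 10               # admits ignorance every time

print(accuracy_only_score(guesser))    # 0.2 -> ranks higher on the benchmark
print(accuracy_only_score(abstainer))  # 0.0 -> penalized despite zero false claims
```

On this scoring scheme, the guessing model looks better while contributing eight confident falsehoods, which is the behavior the paper identifies as hallucination.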