OpenAI Says Models Programmed To Make Stuff Up Instead of Admitting Ignorance
AI models often produce false outputs, or "hallucinations." Now OpenAI has admitted they may result from fundamental mistakes it makes when training its models. The Register: The admission came in a paper [PDF] published in early September, titled "Why Language Models Hallucinate," and penned by three OpenAI researchers and Santosh Vempala, a distinguished professor of computer science at Georgia Institute of Technology. The fundamental problem is that AI models are trained in a way that rewards guesswork rather than correct answers. In the example cited, the model produced three incorrect results because its training taught it to return an answer rather than admit ignorance.
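To illustrate the incentive described, here is a minimal sketch (not from the paper; the numbers and function are hypothetical) of why an accuracy-only grading scheme favors guessing over abstaining: a guess with any nonzero chance of being right earns a higher expected score than "I don't know," which always scores zero.

```python
# Hypothetical illustration: under binary, accuracy-only grading,
# a correct answer scores 1 and anything else -- including "I don't know" -- scores 0.
# Guessing therefore has a higher expected score than abstaining whenever
# the guess has any nonzero chance of being right.

def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected score under binary (accuracy-only) grading."""
    if abstain:
        return 0.0          # "I don't know" is never counted as correct
    return p_correct * 1.0  # a guess earns 1 with probability p_correct, else 0

# A model that is only 10% sure still "wins" by guessing under this metric.
print(expected_score(p_correct=0.10, abstain=False))  # 0.1
print(expected_score(p_correct=0.10, abstain=True))   # 0.0
```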