
OpenAI admits AI hallucinations are mathematically inevitable, not just engineering flaws


In a landmark study, OpenAI researchers reveal that large language models will always produce plausible but false outputs, even with perfect data, due to fundamental statistical and computational limits.

The study, published on September 4 and led by OpenAI researchers Adam Tauman Kalai, Edwin Zhang, and Ofir Nachum alongside Georgia Tech's Santosh S. Vempala, provided a comprehensive mathematical framework explaining why AI systems must generate plausible but false information even when trained on perfect data.

"Like students facing hard exam questions, large language models sometimes guess when uncertain, producing plausible yet incorrect statements instead of admitting uncertainty," the researchers wrote in the paper.

The OpenAI research identified three mathematical factors that made hallucinations inevitable: epistemic uncertainty when information appeared rarely in training data, model limitations where tasks exceeded current architectures' representational capacity, and computational intractability where even superintelligent systems could not solve cryptographically hard problems.
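The intuition behind the first factor, epistemic uncertainty, is simple: if a fact appears only rarely (or never) in the training data, a model that is rewarded for always answering has little choice but to guess. The toy simulation below is my own sketch of that intuition, not the paper's construction; the "birthday" facts, the skewed mention distribution, and every number in it are illustrative assumptions.

```python
# Toy sketch of the "epistemic uncertainty" factor (illustrative, not from the paper).
# Facts are person -> birthday pairs. The "model" recalls any fact it saw during
# training and, because it must always answer, guesses uniformly for everyone else.
import random
from collections import Counter

random.seed(0)

N_PEOPLE, N_DAYS, N_TRAIN = 5_000, 365, 20_000

# Ground truth: every person has exactly one birthday.
truth = {p: random.randrange(N_DAYS) for p in range(N_PEOPLE)}

# Skewed exposure: a few people are mentioned constantly, most barely at all.
mentions = random.choices(range(N_PEOPLE),
                          weights=[1 / (p + 1) for p in range(N_PEOPLE)],
                          k=N_TRAIN)
counts = Counter(mentions)

def answer(person):
    """Memorize facts seen in training; otherwise guess, since abstaining is not allowed."""
    if counts[person] > 0:
        return truth[person]
    return random.randrange(N_DAYS)

errors = sum(answer(p) != truth[p] for p in range(N_PEOPLE))
unseen = sum(1 for p in range(N_PEOPLE) if counts[p] == 0)

print(f"people never seen in training: {unseen / N_PEOPLE:.1%}")
print(f"error rate when forced to answer: {errors / N_PEOPLE:.1%}")
# The error rate tracks the unseen fraction: more or cleaner training of the same
# kind cannot remove it; only letting the model admit uncertainty would.
```

Running the sketch shows the forced-answer error rate sitting just below the fraction of never-seen people, which is the floor the "guess instead of admitting uncertainty" behavior produces in this simplified setting.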
