Researchers describe how to tell if ChatGPT is confabulating
Finding out whether the AI is uncertain about facts or phrasing is the key.
It's one of the world's worst-kept secrets that large language models give blatantly false answers to queries, and they do so with a confidence that's indistinguishable from when they get things right. Now, researchers from the University of Oxford say they've found a relatively simple way to determine when LLMs appear to be confabulating, one that works with all popular models and across a broad range of subjects. As the Oxford team defines them in their paper, confabulations occur when "LLMs fluently make claims that are both wrong and arbitrary—by which we mean that the answer is sensitive to irrelevant details such as random seed."
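The core idea — confabulated answers vary with irrelevant details like the random seed, while well-grounded answers stay semantically stable — can be sketched in a few lines. A minimal, illustrative sketch (not the paper's exact method): sample several answers at nonzero temperature, group them into meaning-clusters, and compute the entropy over those clusters. High entropy suggests the model is arbitrary about the fact itself, not just the phrasing. The `equivalent` callback and `toy_equal` helper are hypothetical stand-ins; a real system would judge semantic equivalence with a stronger model.

```python
import math

def semantic_entropy(answers, equivalent):
    """Estimate entropy over meaning-clusters of sampled answers.

    answers: list of answer strings sampled from an LLM at nonzero temperature
    equivalent: callable(a, b) -> bool judging whether two answers mean the same
    """
    clusters = []  # each cluster is a list of semantically equivalent answers
    for ans in answers:
        for cluster in clusters:
            if equivalent(ans, cluster[0]):
                cluster.append(ans)
                break
        else:
            clusters.append([ans])
    n = len(answers)
    # Entropy over the empirical distribution of meaning-clusters
    return -sum((len(c) / n) * math.log(len(c) / n) for c in clusters)

def toy_equal(a, b):
    """Toy equivalence: ignore case and punctuation. A real implementation
    would use something like bidirectional entailment between answers."""
    norm = lambda s: "".join(ch.lower() for ch in s if ch.isalnum())
    return norm(a) == norm(b)

# Rephrasings of one answer: uncertainty is only about phrasing -> entropy 0
consistent = ["Paris", "paris.", "Paris"]
# Seed-sensitive, arbitrary answers: uncertainty about the fact -> high entropy
arbitrary = ["Paris", "Lyon", "Marseille"]

print(semantic_entropy(consistent, toy_equal))  # 0.0
print(semantic_entropy(arbitrary, toy_equal))   # ln(3) ~ 1.0986
```

The key design point is that entropy is measured over *meanings*, not raw strings: three different phrasings of the same fact count as one cluster, so only genuine disagreement about content drives the score up.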