Is telling a model to "not hallucinate" absurd?
Presumably, "retrieving from memory" and "improvising an answer" are two different model behaviors that rely on different internal mechanisms. So, given these pieces of information, yes, an LLM can be trained to reduce hallucinations upon request. Maybe it's absurd only in the sense that you have to explicitly request it, and that the models weren't trained to always reduce hallucinations. A concrete sketch of what such a request looks like follows below.
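To make the idea concrete, here is a minimal sketch of what "requesting fewer hallucinations" can look like in practice, assuming the OpenAI Python client; the model name, the wording of the instruction, and the example question are illustrative assumptions, not details from the original discussion:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# The explicit "don't hallucinate" request lives in the system prompt.
# Whether it actually helps depends on whether the model was trained
# to change its behavior in response to this kind of instruction.
response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {
            "role": "system",
            "content": (
                "Answer only from information you are confident about. "
                "If you are unsure, say 'I don't know' instead of guessing."
            ),
        },
        {"role": "user", "content": "Who won the 1987 Tour de France?"},
    ],
)

print(response.choices[0].message.content)
```

The point of the sketch is only that the anti-hallucination instruction is an ordinary prompt string: it can steer the model toward "retrieve or refuse" rather than "improvise" only if training has made the model responsive to that kind of request.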