
Is telling a model to "not hallucinate" absurd?

Presumably, "retrieving from memory" and "improvising an answer" are two different model behaviors, which use different internal mechanisms. So, given these pieces of information, yes, LLM can be trained to reduce hallucinations upon request. Maybe it's absurd in the sense that you have to explicitly request it, and that the models weren't trained to always reduce hallucinations.
