Does Fine-Tuning LLMs on New Knowledge Encourage Hallucinations?


When large language models are aligned via supervised fine-tuning, they may encounter new factual information that was not acquired through pre-training. It is often conjectured that this can teach the model the behavior of hallucinating factually incorrect responses, as the model is trained to generate facts that are not grounded in its pre-existing knowledge. In this work, we study the impact of such exposure to new knowledge on the capability of the fine-tuned model to utilize its pre-existing knowledge. To this end, we design a controlled setup, focused on closed-book QA, where we vary the proportion of the fine-tuning examples that introduce new knowledge. We demonstrate that large language models struggle to acquire new factual knowledge through fine-tuning, as fine-tuning examples that introduce new knowledge are learned significantly slower than those consistent with the model's knowledge. However, we also find that as the examples with new knowledge are eventually learned, they linearly increase the model's tendency to hallucinate. Taken together, our results highlight the risk in introducing new factual knowledge through fine-tuning, and support the view that large language models mostly acquire factual knowledge through pre-training, whereas fine-tuning teaches them to use it more efficiently.

By Zorik Gekhman, Gal Yona, Roee Aharoni, Matan Eyal, Amir Feder, Roi Reichart, and Jonathan Herzig
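To make the controlled setup concrete, here is a minimal sketch (not the authors' code) of one way to vary the proportion of fine-tuning examples that introduce new knowledge: probe whether the model already answers a question correctly in a closed-book setting, label each QA pair as Known or Unknown accordingly, and then sample a fine-tuning mix with a chosen fraction of Unknown examples. The `sample_answers` callable and the exact-match check below are hypothetical stand-ins for whatever knowledge-probing procedure is actually used.

```python
import random

def is_known(question, gold_answer, sample_answers, n_samples=10):
    """Heuristic knowledge probe: sample several closed-book answers to the
    question and treat the fact as 'known' if any sample matches the gold
    answer after light normalization. `sample_answers` is a hypothetical
    callable: (question, n) -> list of model generations."""
    answers = sample_answers(question, n_samples)
    norm = lambda s: s.strip().lower()
    return any(norm(a) == norm(gold_answer) for a in answers)

def build_finetuning_mix(examples, sample_answers, unknown_fraction, size, seed=0):
    """Split QA pairs into Known/Unknown relative to the model's pre-existing
    knowledge, then assemble a fine-tuning set of `size` examples in which
    `unknown_fraction` of the examples introduce new knowledge."""
    rng = random.Random(seed)
    known, unknown = [], []
    for ex in examples:
        bucket = known if is_known(ex["question"], ex["answer"], sample_answers) else unknown
        bucket.append(ex)

    n_unknown = round(unknown_fraction * size)
    n_known = size - n_unknown
    mix = rng.sample(unknown, n_unknown) + rng.sample(known, n_known)
    rng.shuffle(mix)
    return mix
```

Sweeping `unknown_fraction` from 0 to 1 then yields the family of fine-tuning mixtures whose effect on learning speed and hallucination the paper measures.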

Read more on:

tuning LLMs

hallucinations

new knowledge

Related news:

Pivot to AI: Hallucinations worsen as the money runs out

Microsoft’s new safety system can catch hallucinations in its customers’ AI apps

Glass supercharges smartphone cameras with AI — minus the hallucinations