On the Existence, Impact, and Origin of Hallucination-Associated Neurons in LLMs


Large language models (LLMs) frequently generate hallucinations -- plausible but factually incorrect outputs -- undermining their reliability. While prior work has examined hallucinations from macroscopic perspectives such as training data and objectives, the underlying neuron-level mechanisms remain largely unexplored. In this paper, we conduct a systematic investigation into hallucination-associated neurons (H-Neurons) in LLMs from three perspectives: identification, behavioral impact, and origins. Regarding identification, we demonstrate that a remarkably sparse subset of neurons (less than 0.1% of the total) can reliably predict hallucination occurrences, with strong generalization across diverse scenarios. In terms of behavioral impact, controlled interventions reveal that these neurons are causally linked to over-compliance behavior. Concerning their origins, we trace these neurons back to the pre-trained base models and find that they remain predictive of hallucinations there, indicating they emerge during pre-training. Our findings bridge macroscopic behavioral patterns with microscopic neural mechanisms, offering insights for developing more reliable LLMs.
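The abstract describes the identification step only at a high level. As a rough illustration of how a sparse, hallucination-predictive subset of neurons could be selected, the sketch below fits an L1-regularized logistic-regression probe on per-prompt neuron activations; the synthetic data, layer choice, regularization strength, and hallucination labels are all hypothetical placeholders, not the authors' actual procedure.

```python
# Hypothetical sketch: selecting a sparse set of hallucination-predictive
# neurons with an L1-regularized linear probe. All data below is synthetic;
# in practice the activations would be collected from an LLM layer and the
# labels would come from hallucination annotations on the model's outputs.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Placeholder data: rows are prompts, columns are neuron activations;
# label 1 marks a response judged to be a hallucination.
n_prompts, n_neurons = 2000, 4096
activations = rng.normal(size=(n_prompts, n_neurons))
labels = rng.integers(0, 2, size=n_prompts)

X_train, X_test, y_train, y_test = train_test_split(
    activations, labels, test_size=0.2, random_state=0
)

# The L1 penalty drives most neuron weights to exactly zero, so the
# surviving nonzero coefficients pick out a small candidate neuron set.
probe = LogisticRegression(penalty="l1", solver="liblinear", C=0.05)
probe.fit(X_train, y_train)

selected = np.flatnonzero(probe.coef_[0])
print(f"selected {selected.size} / {n_neurons} neurons "
      f"({selected.size / n_neurons:.2%})")
print(f"held-out accuracy: {probe.score(X_test, y_test):.3f}")
```

On real activations, the same probe could be evaluated on held-out domains to check the cross-scenario generalization the abstract reports, and the selected indices would be the candidates for the intervention and base-model analyses.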

Related news:

LLMs can be prompt-injected to give bad medical advice, including thalidomide to a pregnant woman

AI in Focus in 2026, Traders Look Past LLMs

Debug Mode for LLMs in vLLora