Why RAG won’t solve generative AI’s hallucination problem
RAG is being pitched as a solution of sorts to generative AI hallucinations. But there are limits to what the technique can do.
RAG was pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and lead author of the 2020 paper that coined the term. RAG is undeniably useful: it lets you trace what a model generates back to the documents it retrieved, so you can check the output's factuality (and, as an added benefit, avoid potentially copyright-infringing regurgitation). But retrieval is only as good as the query that drives it. Things get trickier with "reasoning-intensive" tasks such as coding and math, where it's harder to spell out in a keyword-based search query the concepts needed to answer a request, much less identify which documents might be relevant.
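To make the attribution point concrete, here is a minimal sketch of a keyword-based RAG pipeline in Python. The document store, the naive term-overlap retriever, and the prompt format are all illustrative assumptions rather than the method from Lewis's paper; the idea is simply that the prompt cites document ids, which is what lets a generated claim be traced back to a source.

```python
from collections import Counter

# Toy corpus standing in for a real document store (illustrative only).
DOCS = {
    "doc1": "Retrieval-augmented generation grounds model answers in retrieved documents.",
    "doc2": "Keyword search matches query terms against document terms.",
    "doc3": "Reasoning-intensive tasks like math are hard to express as keyword queries.",
}

def keyword_retrieve(query: str, k: int = 2) -> list[str]:
    """Rank documents by naive term overlap with the query (a stand-in for keyword search)."""
    q_terms = Counter(query.lower().split())
    scores = {
        doc_id: sum(q_terms[t] for t in text.lower().split() if t in q_terms)
        for doc_id, text in DOCS.items()
    }
    return sorted(scores, key=scores.get, reverse=True)[:k]

def build_prompt(query: str) -> str:
    """Assemble a prompt that asks the model to answer only from the
    retrieved documents and to cite them by id, enabling attribution."""
    doc_ids = keyword_retrieve(query)
    context = "\n".join(f"[{d}] {DOCS[d]}" for d in doc_ids)
    return (
        "Answer using only the sources below and cite them by id.\n"
        f"{context}\n\nQuestion: {query}\nAnswer:"
    )

if __name__ == "__main__":
    print(build_prompt("How does retrieval-augmented generation ground answers?"))
```

The weakness the article describes shows up directly in this sketch: a question like "why does my recursive function overflow the stack only for odd inputs?" shares few literal terms with the documents that would actually help, so the retriever surfaces little of use no matter how good the generator is.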