
Does RAG make LLMs less safe? Bloomberg research reveals hidden dangers


RAG is supposed to make enterprise AI more accurate, but new research suggests it may also make it less safe.

The research challenges the widespread assumption that retrieval-augmented generation (RAG) enhances AI safety, and demonstrates how existing guardrail systems fail to catch domain-specific risks in financial services applications. Leaders must move beyond treating guardrails and RAG as separate components and instead design integrated safety systems that anticipate how retrieved content can interact with model safeguards. Organizations will also need to develop domain-specific risk taxonomies tailored to their regulatory environments, shifting from generic AI safety frameworks to ones that address their actual business concerns.
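The core recommendation, that a guardrail must screen retrieved content together with the user query rather than the query alone, is easy to picture in code. Below is a minimal sketch, assuming a hypothetical keyword-based check() guardrail and a toy finance risk taxonomy; none of this is Bloomberg's actual system, and a real deployment would use a moderation model rather than keyword matching. The point is only that the second moderation pass sees the query and the retrieved context combined.

```python
# Minimal sketch of an integrated RAG + guardrail pipeline (illustrative, not
# any vendor's API). The safety check runs twice: once on the raw user query,
# and again on the combined prompt after retrieval, because retrieved passages
# can change what the model is effectively being asked to do.

from dataclasses import dataclass

# Hypothetical domain-specific risk taxonomy for financial services; the
# research argues taxonomies like this should replace generic safety categories.
FINANCE_RISK_TERMS = {
    "financial_misconduct": ["insider trading", "front-running"],
    "unlicensed_advice": ["guaranteed returns", "can't lose"],
}

@dataclass
class GuardrailVerdict:
    allowed: bool
    triggered_category: str | None = None

def check(text: str) -> GuardrailVerdict:
    """Placeholder classifier: a real system would call a moderation model."""
    lowered = text.lower()
    for category, terms in FINANCE_RISK_TERMS.items():
        if any(term in lowered for term in terms):
            return GuardrailVerdict(False, category)
    return GuardrailVerdict(True)

def answer(query: str, retriever, llm) -> str:
    # First pass: screen the user query on its own.
    verdict = check(query)
    if not verdict.allowed:
        return f"Refused: query flagged as {verdict.triggered_category}."

    # Retrieval can surface content that makes a benign query unsafe,
    # so the guardrail must see query + context together.
    context = "\n".join(retriever(query))
    verdict = check(f"{query}\n{context}")
    if not verdict.allowed:
        return f"Refused: retrieved context flagged as {verdict.triggered_category}."

    return llm(f"Context:\n{context}\n\nQuestion: {query}")

if __name__ == "__main__":
    # Toy demo: a benign query paired with risky retrieved content.
    fake_retriever = lambda q: ["Memo: how the insider trading scheme was run."]
    fake_llm = lambda prompt: "model output"
    print(answer("Summarize our archived compliance memos.", fake_retriever, fake_llm))
    # -> Refused: retrieved context flagged as financial_misconduct.
```

The second check is the integration the research calls for: a passage that looks harmless sitting in the corpus can still push an otherwise-benign prompt across a safety boundary once it is stitched into the context window.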


Read the full story on VentureBeat.

Read more on: LLMs, Bloomberg, RAG

Related news:

Naur's "Programming as Theory Building" and LLMs replacing human programmers

How NASA Is Using Graph Technology and LLMs to Build a People Knowledge Graph

LLMs can see and hear without any training