
Fine-tuning LLMs is a waste of time


People think they can use fine-tuning for knowledge injection. They're wrong.

Instead, use modular methods: retrieval-augmented generation (RAG), external memory banks, adapter modules, or prompt engineering. These techniques inject new information without overwriting the base model's existing knowledge. Plenty of people proclaim that RAG is dead (we'll address this eventually), but it remains by far the most reliable technique for question answering over large knowledge stores.
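The core RAG loop is simple enough to sketch in a few lines: retrieve the documents most similar to the query, stuff them into the prompt, and send that prompt to an unmodified model. This is a minimal stdlib-only sketch; the bag-of-words cosine retriever and the example documents are illustrative stand-ins (real systems use dense embeddings and a vector store), and the final model call is left out since any LLM API can consume the prompt.

```python
# Minimal RAG sketch: retrieve relevant text, then prepend it to the prompt.
# The base model's weights are never touched -- new knowledge lives in `docs`.
import math
import re
from collections import Counter

def vectorize(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real RAG uses a dense embedding model.
    return Counter(re.findall(r"[a-z0-9]+", text.lower()))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    qv = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(qv, vectorize(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(f"- {d}" for d in retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Hypothetical knowledge store -- the information we want to "inject".
docs = [
    "The warranty covers parts and labor for two years.",
    "Shipping within the EU takes 3-5 business days.",
    "Returns are accepted within 30 days of delivery.",
]
prompt = build_prompt("How long is the warranty?", docs)
# `prompt` now contains the warranty document; send it to any LLM API as-is.
```

The point of the sketch is the separation of concerns: updating what the system "knows" means editing `docs`, not retraining anything.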
