Fine-tuning LLMs is a waste of time
People think they can use fine-tuning for knowledge injection. They are wrong.
Instead, use modular methods: retrieval-augmented generation (RAG), external memory banks, adapter modules, or prompt engineering. These techniques inject new information without overwriting the knowledge the base model already encodes, so the model's carefully built ecosystem stays intact. Plenty of people proclaim that "RAG is dead" (we'll address that claim eventually), but for QA over large knowledge stores it remains by far the most reliable technique.
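To make the idea concrete, here is a minimal RAG sketch in Python. The toy `embed()` function, the `knowledge_store` contents, and the "Foobar 3000" facts are all illustrative placeholders, not any particular library's API; in practice you would swap in a real embedding model and vector store and send the final prompt to whatever LLM you already use. The point is that the new facts live in the retrieved context, not in the model's weights.

```python
# Minimal RAG sketch: retrieve relevant documents, then prepend them
# to the prompt. No fine-tuning, no weight updates.
from collections import Counter
from math import sqrt

def embed(text: str) -> Counter:
    # Toy embedding: lowercase bag-of-words counts.
    # Stand-in for a real embedding model.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = sqrt(sum(v * v for v in a.values())) * sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

# New knowledge lives in the store, not in the model's weights.
knowledge_store = [
    "The Foobar 3000 ships with a 48-hour battery.",
    "Foobar 3000 firmware 2.1 added offline mode.",
    "Unrelated note about office snacks.",
]

query = "How long does the Foobar 3000 battery last?"
context = "\n".join(retrieve(query, knowledge_store))
prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
print(prompt)  # Feed this to any LLM; the base model is untouched.
```

Because retrieval happens at query time, updating what the system "knows" is as simple as appending a document to the store; there is no retraining step that could degrade the base model.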