What We Learned from a Year of Building with LLMs (Part I)

To hear directly from the authors on this topic, sign up for the upcoming virtual event on June 20th, and the Generative AI Success Stories Superstream on the O’Reilly Media learning platform. Parts II and III of this series are forthcoming.
While the technology is still rapidly developing, we hope these lessons, the by-product of countless experiments we’ve collectively run, will stand the test of time and help you build and ship robust LLM applications.

Practitioners have found RAG effective at providing knowledge and improving output, while requiring far less effort and cost compared to finetuning. RAG is only as good as the retrieved documents’ relevance, density, and detail.

Reference-free evals are evaluations that don’t rely on a “golden” reference, such as a human-written answer, and can assess the quality of output based solely on the input prompt and the model’s response.
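To make the RAG point above concrete, here is a minimal sketch of retrieval-augmented generation. It is not the authors' setup: the `index` and `llm` objects, and the `search` and `complete` methods they expose, are stand-ins for whatever retrieval index and model client you actually use.

```python
# Minimal RAG sketch. `index` and `llm` are assumed stand-ins:
# `index.search(query, k)` returns scored documents and
# `llm.complete(prompt)` returns a string completion.
from dataclasses import dataclass


@dataclass
class Document:
    text: str
    score: float  # retrieval relevance score


def retrieve(query: str, index, k: int = 5) -> list[Document]:
    """Return the k documents the index ranks most relevant to the query."""
    return index.search(query, k)


def answer_with_rag(query: str, index, llm) -> str:
    """Ground the model's answer in retrieved context.

    Output quality tracks the relevance, density, and detail of what
    `retrieve` returns: thin or off-topic context yields weak answers.
    """
    docs = retrieve(query, index)
    context = "\n\n".join(d.text for d in docs)
    prompt = (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )
    return llm.complete(prompt)
```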
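And to illustrate the reference-free eval definition, the sketch below has an LLM judge score a response given only the original request and the response itself, with no golden answer to compare against. The judge prompt, the JSON rubric, and the `llm.complete` client method are all assumptions for illustration, not a prescribed implementation.

```python
# Reference-free eval sketch: an LLM judge grades a response from the
# input prompt and the output alone. `llm.complete` is an assumed client
# method; the rubric and JSON format are illustrative choices.
import json

JUDGE_PROMPT = """You are grading an assistant's response.
Given the user's request and the response, rate the response from 1 to 5
for faithfulness to the request and overall helpfulness.
Return JSON: {{"score": <int>, "reason": "<one sentence>"}}

Request:
{request}

Response:
{response}"""


def reference_free_eval(request: str, response: str, llm) -> dict:
    """Score a response without a human-written reference answer."""
    raw = llm.complete(JUDGE_PROMPT.format(request=request, response=response))
    # Assumes the judge follows the JSON instruction, e.g. {"score": 4, "reason": "..."}
    return json.loads(raw)
```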