Meta’s Self-Taught Evaluator enables LLMs to create their own training data


Researchers at Meta have released Self-Taught Evaluator, a technique that enables LLMs to automatically label their training data.

Human evaluation has long been the gold standard for assessing the quality and accuracy of large language models (LLMs), especially on open-ended tasks such as creative writing and coding, but it is slow and expensive. Automated techniques like Self-Taught Evaluator can significantly reduce the manual effort required to create high-performing LLMs, paving the way for more efficient and scalable development and deployment of AI-powered applications. At the same time, fully automated loops that rely solely on LLMs to evaluate their own outputs can latch onto meaningless shortcuts that optimize the model for a benchmark but fail on real-world tasks.
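The idea can be illustrated with a minimal sketch of a self-labeling loop: the model produces pairs of responses where one is known to be worse (because it answers a perturbed prompt), an LLM-as-judge picks a winner, and only judgments that agree with the known preference are kept as training data. The functions `generate` and `judge` below are hypothetical stand-ins for real LLM calls, not Meta's implementation.

```python
# Sketch of a self-taught evaluation loop (assumptions: `generate` and
# `judge` are placeholders for actual LLM API calls).

def generate(prompt: str, degrade: bool = False) -> str:
    # Stand-in for an LLM call; the degraded variant simulates answering
    # a perturbed version of the prompt, yielding a known-worse response.
    return f"weak answer to {prompt}" if degrade else f"good answer to {prompt}"

def judge(prompt: str, a: str, b: str) -> str:
    # Stand-in LLM-as-judge: returns "a" or "b" for the preferred response.
    return "a" if "good" in a else "b"

def build_training_data(prompts):
    data = []
    for p in prompts:
        good = generate(p)                # preferred response
        bad = generate(p, degrade=True)   # known-worse response
        verdict = judge(p, good, bad)
        # Keep only judgments that agree with the known preference;
        # disagreements are discarded rather than used as labels.
        if verdict == "a":
            data.append({"prompt": p, "chosen": good, "rejected": bad})
    return data

examples = build_training_data(["summarize X", "write a poem"])
print(len(examples))  # prints 2
```

The filtering step is what keeps the loop from reinforcing its own mistakes: a judgment is only trusted when it can be checked against a preference that was constructed to be known in advance.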

Read the full article on VentureBeat.

Read more on:

LLMs

training data

Related news:

Markov chains are funnier than LLMs

LLMs develop their own understanding of reality as their language abilities improve | In controlled experiments, MIT CSAIL researchers discover simulations of reality developing deep within LLMs, indicating an understanding of language beyond simple mimicry.

LLMs excel at inductive reasoning but struggle with deductive tasks, new research shows