With Quiet-STaR, language models learn to think before speaking
Researchers have taught AI models to think before responding to prompts — just as (most) people consider what to say before speaking.
The researchers introduced Quiet-STaR, an extension of the Self-Taught Reasoner (STaR) method, in which a model trained on a broad corpus of internet text learns to generate rationales at each token to explain future text and improve its predictions. They add that, "by training on the rich spectrum of reasoning tasks implicit in diverse web text, rather than narrowly specializing for particular datasets, Quiet-STaR points the way to more robust and adaptable language models." To help reduce variance, the researchers also use a "teacher forcing" trick, which keeps the network's training inputs pinned to the ground-truth sequence rather than to its own earlier predictions.
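To make the idea concrete, here is a minimal, heavily simplified sketch in Python (PyTorch) of the two mechanics described above: sampling a rationale at a position and measuring how much it improves teacher-forced prediction of the true continuation. The `ToyLM` model, the greedy `sample_rationale` helper, and all dimensions are illustrative assumptions, not the authors' implementation; the actual paper interleaves "thought" tokens with learned start/end markers and a mixing head, which are omitted here.

```python
# Simplified sketch of the Quiet-STaR idea, NOT the authors' code:
# at each position, sample a short rationale ("thought"), then compare the
# teacher-forced log-likelihood of the true future tokens with and without it.
import torch
import torch.nn as nn

class ToyLM(nn.Module):
    """Illustrative stand-in language model (all dimensions made up)."""
    def __init__(self, vocab=50, dim=32):
        super().__init__()
        self.emb = nn.Embedding(vocab, dim)
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.out = nn.Linear(dim, vocab)

    def forward(self, ids):                      # ids: (batch, seq)
        h, _ = self.rnn(self.emb(ids))
        return self.out(h)                       # (batch, seq, vocab)

def log_prob_of_future(model, prefix, future):
    """Teacher-forced score: feed the ground-truth tokens as inputs and sum
    the log-probabilities the model assigns to the true future tokens."""
    tokens = torch.cat([prefix, future])
    logits = model(tokens[:-1].unsqueeze(0)).squeeze(0)  # row i predicts tokens[i+1]
    logp = torch.log_softmax(logits, dim=-1)
    # Rows len(prefix)-1 onward are the predictions for the future tokens.
    return logp[len(prefix) - 1:].gather(1, future.unsqueeze(1)).sum()

def sample_rationale(model, prefix, length=3):
    """Hypothetical sampler: greedy continuation used as the 'thought'."""
    ids = prefix.clone()
    for _ in range(length):
        nxt = model(ids.unsqueeze(0))[0, -1].argmax()
        ids = torch.cat([ids, nxt.unsqueeze(0)])
    return ids[len(prefix):]

def rationale_reward(model, prefix, future):
    """Reward = improvement in predicting the true continuation when the
    rationale is prepended; positive means the 'thought' helped."""
    base = log_prob_of_future(model, prefix, future)
    thought = sample_rationale(model, prefix)
    with_thought = log_prob_of_future(model, torch.cat([prefix, thought]), future)
    return with_thought - base

model = ToyLM()
prefix = torch.randint(0, 50, (6,))   # fake token ids standing in for text
future = torch.randint(0, 50, (4,))
print(rationale_reward(model, prefix, future).item())
```

In the paper, a signal of this kind feeds a REINFORCE-style update, so over training the model learns to produce rationales that actually help it predict the text that follows.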
Or read this on VentureBeat