DeepMind’s SCoRe shows LLMs can use their internal knowledge to correct their mistakes
Sometimes, LLMs already have the internal knowledge needed to self-correct their responses. They just need the right training technique to use it.
“Self-correction is a capability that greatly enhances human thinking,” Aviral Kumar, a research scientist at Google DeepMind, told VentureBeat. That is why there is growing interest in enabling LLMs to spot and fix their own mistakes, a capability known as “self-correction.” Current approaches to self-correction, however, are limited and impose requirements that often cannot be met in real-world settings.

The researchers believe their work has broader implications for training LLMs, underscoring the importance of teaching models to reason about and correct their own answers rather than simply mapping inputs to outputs.
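At inference time, self-correction is usually framed as a two-turn exchange: the model produces a first answer, then is prompted to review and revise it using only its own knowledge. The sketch below illustrates that loop; the `generate` function is a hypothetical stand-in for any LLM completion call, and the prompts are illustrative rather than the ones DeepMind used.

```python
# Minimal sketch of a two-attempt self-correction loop.
# `generate` is a hypothetical placeholder for an LLM completion call;
# the prompts are illustrative, not DeepMind's actual SCoRe prompts.

def generate(prompt: str) -> str:
    """Stand-in for a model call; replace with a real completion API."""
    raise NotImplementedError("Plug in your model's completion call here.")

def self_correct(question: str) -> tuple[str, str]:
    # Turn 1: the model produces its initial answer.
    first_attempt = generate(f"Question: {question}\nAnswer:")

    # Turn 2: the model reviews its own answer and revises it,
    # relying only on internal knowledge (no external feedback or tools).
    revision_prompt = (
        f"Question: {question}\n"
        f"Your previous answer: {first_attempt}\n"
        "Review your previous answer for mistakes and give a corrected final answer."
    )
    second_attempt = generate(revision_prompt)
    return first_attempt, second_attempt
```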