Meta researchers distill System 2 thinking into LLMs, improving performance on complex reasoning
A technique developed by researchers at Meta FAIR distills System 2 thinking into LLMs so that they can perform reasoning tasks with fewer resources
Large language models (LLMs) are very good at answering simple questions but require special prompting techniques to handle complex tasks that need reasoning and planning. In recent years, AI researchers have shown that LLMs can be made to mimic System 2 thinking by prompting them to generate intermediate reasoning steps before providing their final answer. “Many of these methods are shown to produce more accurate results due to this explicit reasoning, but typically do so at much higher inference cost and latency for a response,” the Meta AI researchers write.
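The core of these System 2 prompting techniques is straightforward: instead of asking the model for an answer directly, the prompt instructs it to write out intermediate reasoning steps and only then state its conclusion. The sketch below illustrates that pattern under stated assumptions; `call_llm` is a hypothetical placeholder for whatever chat-completion client is in use, and the prompt wording is illustrative rather than Meta's actual method.

```python
# Minimal sketch of System 2-style (step-by-step) prompting vs. direct prompting.
# `call_llm` is a hypothetical stand-in for any real LLM client.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with a real chat-completion client."""
    raise NotImplementedError("Wire this up to your LLM provider of choice.")

def direct_answer(question: str) -> str:
    # System 1 style: ask for the answer with no explicit reasoning.
    return call_llm(f"Answer concisely: {question}")

def system2_answer(question: str) -> str:
    # System 2 style: ask the model to reason step by step before answering.
    # This tends to be more accurate on complex tasks, but the extra reasoning
    # tokens raise inference cost and latency -- the trade-off the distillation
    # work aims to remove.
    prompt = (
        f"Question: {question}\n"
        "Think through the problem step by step, writing out your reasoning.\n"
        "Then give the final answer on a new line starting with 'Answer:'."
    )
    full_output = call_llm(prompt)
    # Keep only the final answer; the intermediate reasoning is discarded.
    return full_output.rsplit("Answer:", 1)[-1].strip()
```

The extra reasoning text is what drives up cost: the model must generate and attend over many more tokens per query, which is why distilling that behavior back into the model's direct responses is attractive.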