Google DeepMind proposes ‘self-discover’ framework for LLMs, improves GPT-4 performance
In 21 out of 25 tasks, the self-discover framework outperformed chain-of-thought reasoning and other prompting techniques, with performance gains of up to 32%.
Published on arXiv and Hugging Face this morning, the approach goes beyond existing prompting techniques and has been shown to improve the performance of well-known models, including OpenAI’s GPT-4 and Google’s PaLM 2. To achieve this, the transformer-based models are guided by prompting strategies inspired by cognitive theories of how humans reason and solve problems. While the self-discover prompting framework has only just been proposed, it has the potential to push the boundaries of problem-solving and give LLMs the ability to address challenging problems with ease, ultimately moving toward the goal of general intelligence.
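The article does not spell out the mechanics, but a self-discover-style pipeline could plausibly be staged as a short sequence of prompts: the model first selects which reasoning strategies fit the task, then composes them into a task-specific plan, and finally follows that plan to produce an answer. The Python sketch below is illustrative only; the call_llm helper, the REASONING_MODULES list, and all prompt wording are assumptions for this sketch, not DeepMind's published implementation.

# Illustrative sketch of a self-discover-style prompting pipeline.
# Assumption: call_llm is a hypothetical placeholder for a real model
# client (e.g. an API call to GPT-4 or PaLM 2) that takes a prompt
# string and returns the model's text response.

# A small set of "atomic" reasoning strategies inspired by cognitive
# theories of human problem-solving (illustrative, not exhaustive).
REASONING_MODULES = [
    "Break the problem into smaller sub-problems.",
    "Think step by step and verify each step.",
    "Use critical thinking to question assumptions.",
    "Recall similar problems and reuse their solution patterns.",
]


def call_llm(prompt: str) -> str:
    """Hypothetical LLM wrapper; replace with a real API client."""
    raise NotImplementedError("Plug in an actual model call here.")


def self_discover(task_description: str) -> str:
    # Stage 1: ask the model which reasoning strategies suit this task.
    select_prompt = (
        "Select the reasoning modules most relevant to the task below:\n"
        + "\n".join(f"- {m}" for m in REASONING_MODULES)
        + f"\n\nTask: {task_description}\n"
    )
    selected = call_llm(select_prompt)

    # Stage 2: ask the model to compose the selected strategies into a
    # concrete, task-specific reasoning structure (a step-by-step plan).
    structure_prompt = (
        "Adapt the selected reasoning modules into a step-by-step "
        f"reasoning structure for this task.\nSelected:\n{selected}\n"
        f"Task: {task_description}\n"
    )
    structure = call_llm(structure_prompt)

    # Stage 3: solve the task by following the self-discovered structure.
    solve_prompt = (
        "Follow this reasoning structure to solve the task.\n"
        f"Structure:\n{structure}\nTask: {task_description}\nAnswer:"
    )
    return call_llm(solve_prompt)

The key design idea in such a pipeline is that the reasoning structure is discovered once per task rather than hand-written per prompt, which is what would let it generalize across the benchmark tasks the article mentions.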
Or read this on Venture Beat