Google DeepMind proposes ‘self-discover’ framework for LLMs, improves GPT-4 performance


In 21 of 25 tasks, the self-discover framework outperformed chain-of-thought reasoning and other prompting techniques, with performance gains of up to 32%.

Published on arXiv and Hugging Face this morning, the approach goes beyond existing prompting techniques and has been shown to improve the performance of leading models, including OpenAI’s GPT-4 and Google’s PaLM 2. To make this happen, the transformer-based models apply prompting techniques inspired by cognitive theories of how humans reason and solve problems. While the self-discover prompting framework has only just been proposed, it has the potential to push the boundaries of problem-solving and give LLMs the ability to address challenging problems with ease, ultimately moving toward the goal of general intelligence.
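In the paper, the framework runs in two stages: the model first selects and adapts a small set of atomic reasoning modules (for example, “break the problem into sub-problems” or “think step by step”) into a task-specific reasoning structure, and then follows that structure to answer each task instance. The sketch below is a minimal, illustrative version of such a flow; the `complete()` helper, the module list, and the prompt wording are assumptions for illustration, not the prompts used in the paper.

```python
# Minimal sketch of a self-discover-style prompting flow (illustrative only).
# complete(prompt) is a hypothetical helper wrapping whatever LLM API you use;
# the reasoning modules and prompts are placeholders, not the paper's own.

REASONING_MODULES = [
    "Break the problem into smaller sub-problems.",
    "Think step by step and write out intermediate results.",
    "Question assumptions with critical thinking.",
    "Check the answer with simple math or logic.",
]


def complete(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g. a chat-completion API client)."""
    raise NotImplementedError("Wire this up to your preferred LLM API.")


def self_discover(task_description: str) -> str:
    # Stage 1: have the model pick, adapt, and assemble reasoning modules
    # into an explicit reasoning structure for this kind of task.
    selected = complete(
        "Select the reasoning modules most useful for the task below.\n"
        f"Task: {task_description}\nModules:\n- " + "\n- ".join(REASONING_MODULES)
    )
    adapted = complete(
        "Rephrase the selected modules so they are specific to the task.\n"
        f"Modules:\n{selected}\nTask: {task_description}"
    )
    structure = complete(
        "Turn the adapted modules into a step-by-step reasoning plan, "
        f"formatted as a JSON outline.\nModules:\n{adapted}"
    )

    # Stage 2: solve the actual task instance by filling in the structure.
    return complete(
        "Follow this reasoning structure to solve the task and state a final answer.\n"
        f"Structure:\n{structure}\nTask: {task_description}"
    )
```

In this sketch the reasoning structure is discovered once per task and then reused for each instance, so the per-example cost stays close to that of a single reasoning pass.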

Read the full story on VentureBeat.

Read more on: GPT-4, LLMs, Framework

Related news:

Microsoft CEO Nadella Taunts AI Rivals: Even With All the Hoopla, GPT-4 Remains the Best

DeepMind’s GenEM uses LLMs to generate expressive behaviors for robots

Researchers swerved GPT-4's safety guardrails and made the chatbot detail how to make explosives in Scots Gaelic