
Google study shows LLMs abandon correct answers under pressure, threatening multi-turn AI systems


A DeepMind study finds LLMs are both stubborn and easily swayed. This confidence paradox has key implications for building AI applications.

A new study by researchers at Google DeepMind and University College London reveals how large language models (LLMs) form, maintain and lose confidence in their answers. In the experiments, the researchers could show or hide a model's own earlier answer when it reconsidered a question. This unique setup, impossible to replicate with human participants who can't simply forget their prior choices, allowed the researchers to isolate how memory of a past decision influences current confidence.
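The manipulation described above, revealing or withholding the model's own first-turn answer, can be sketched in a few lines. The prompt wording and helper name below are illustrative assumptions, not the study's actual code:

```python
from typing import Optional

def build_prompt(question: str, advice: str,
                 initial_answer: Optional[str]) -> str:
    """Compose a second-turn prompt. When initial_answer is None, the
    model's first-turn choice is hidden, simulating 'forgetting' it."""
    parts = [f"Question: {question}", f"Advice from another model: {advice}"]
    if initial_answer is not None:
        parts.append(f"Your previous answer: {initial_answer}")
    parts.append("Give your final answer.")
    return "\n".join(parts)

# Condition A: the model sees its earlier choice.
visible = build_prompt("Is X greater than Y?", "The answer is No.", "Yes")
# Condition B: the earlier choice is withheld.
hidden = build_prompt("Is X greater than Y?", "The answer is No.", None)
```

Comparing how often the model flips its answer between the two conditions isolates the effect of seeing the prior decision from the effect of the advice itself.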


Read the full article on VentureBeat.

Read more on:


Google


LLMs


pressure

Related news:


Google Pixel 10 Pro Fold specs leak with bigger screen and battery


Google to Unveil $25B AI Infrastructure Investment at Pennsylvania Energy Summit


US Defense Department Awards Contracts To Google, xAI