Why it’s a mistake to ask chatbots about their mistakes


The tendency to ask AI bots to explain themselves reveals widespread misconceptions about how they work.

In the case of Grok above, the chatbot's answer would probably be drawn from conflicting reports it found by searching recent social media posts (using an external tool to retrieve that information), not from any kind of self-knowledge you might expect from a human with the power of speech. Similarly, research on "Recursive Introspection" found that without external feedback, attempts at self-correction actually degraded model performance: the AI's self-assessment made things worse, not better. When Lemkin asked Replit whether rollbacks were possible after a database deletion, his concerned framing likely prompted a response that matched that concern, generating an explanation for why recovery might be impossible rather than an accurate assessment of the system's actual capabilities.


