Why it’s a mistake to ask chatbots about their mistakes
The tendency to ask AI bots to explain themselves reveals widespread misconceptions about how they work.
In the case of Grok above, the chatbot's answer most likely drew on conflicting reports it found while searching recent social media posts (using an external tool to retrieve that information), rather than on any kind of self-knowledge you might expect from a human with the power of speech. Similarly, research on "Recursive Introspection" found that without external feedback, attempts at self-correction actually degraded model performance: the AI's self-assessment made things worse, not better. When Lemkin asked Replit whether rollbacks were possible after a database deletion, his concerned framing likely prompted a response that matched that concern, generating an explanation for why recovery might be impossible rather than an accurate assessment of what the system could actually do.
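To make the point concrete, here is a minimal sketch (all names hypothetical, not any vendor's actual pipeline) of why a chatbot's "explanation" of its own behavior is just text generated from whatever lands in its prompt, retrieved posts and the user's framing included, with no channel back into the system's real state or capabilities:

```python
# Hypothetical illustration: the reply is a function of the prompt text only.
# Nothing here inspects the model's weights, logs, or the system being asked about.

def build_prompt(user_question: str, retrieved_posts: list[str]) -> str:
    """Assemble the context the model actually conditions on."""
    context = "\n".join(f"- {post}" for post in retrieved_posts)
    return (
        "You are a helpful assistant.\n"
        f"Search results:\n{context}\n\n"
        f"User: {user_question}\n"
        "Assistant:"
    )

def generate_answer(user_question: str, retrieved_posts: list[str]) -> str:
    """Stand-in for the model call: output depends only on the assembled prompt."""
    prompt = build_prompt(user_question, retrieved_posts)
    # A real model would continue this text; either way, there is no
    # introspection step, only continuation of the prompt it was given.
    return f"<generated continuation of a {len(prompt)}-character prompt>"

if __name__ == "__main__":
    # Conflicting search results plus a worried framing steer the output;
    # at no point does the pipeline consult actual system capabilities.
    posts = [
        "Post A: the bot was reportedly told to avoid the topic",
        "Post B: no such instruction exists",
    ]
    print(generate_answer("Were you told to suppress this? Is recovery even possible?", posts))
```

Under these assumptions, a question asked with alarming framing simply becomes more alarming prompt text, which is why the generated "explanation" tends to echo the user's fears rather than report any ground truth.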