Why You Can’t Trust a Chatbot to Talk About Itself


Anytime you expect AI to be self-aware, you’re in for disappointment. That’s just not how it works.

There is no consistent “ChatGPT” to interrogate about its mistakes, no singular “Grok” entity that can tell you why it failed, no fixed “Replit” persona that knows whether database rollbacks are possible. Once an AI language model is trained (a laborious, energy-intensive process), its foundational “knowledge” about the world is baked into its neural network and is rarely modified. In the case of Grok above, an answer like this would probably originate from conflicting reports the chatbot found in a search of recent social media posts (using an external tool to retrieve that information), rather than from any kind of self-knowledge, as you might expect from a human with the power of speech.

Or read this on Wired
