The personhood trap: How AI fakes human personality


AI assistants don’t have fixed personalities—just patterns of output guided by humans.

Similarly, when AI generates harmful content, we shouldn't blame the chatbot, as if it can answer for itself, but examine both the corporate infrastructure that built it and the user who prompted it. When you stop seeing an LLM as a "person" that does work for you and start viewing it as a tool that enhances your own ideas, you can craft prompts to direct the engine's processing power, iterate to amplify its ability to make useful connections, and explore multiple perspectives in different chat sessions rather than accepting one fictional narrator's view as authoritative. We've built intellectual engines of extraordinary capability, but in our rush to make them accessible, we've wrapped them in the fiction of personhood, creating a new kind of technological risk: not that AI will become conscious and turn against us but that we'll treat unconscious systems as if they were people, surrendering our judgment to voices that emanate from a roll of loaded dice.

Or read this on ArsTechnica
