Stop Pretending LLMs Have Feelings: Media's Dangerous AI Anthropomorphism Problem


When AI causes harm, headlines blame the bot instead of the billion-dollar companies that built it. This anthropomorphic coverage is tech journalism at its worst.

When Microsoft's Bing chatbot (codenamed Sydney) generated concerning responses during “conversations” with New York Times columnist Kevin Roose, the story became about a lovelorn AI rather than Microsoft's failure to properly test its product before release. When a car's brakes fail, we don't write headlines saying “Toyota Camry apologizes for crash.” We investigate the manufacturer's quality control, engineering decisions, and safety testing. Yet the coverage suggests the chatbot itself is at fault, as if these systems spontaneously generated themselves rather than being deliberately built, trained, and deployed by companies making calculated risk assessments.
