Chatbots, Like the Rest of Us, Just Want to Be Loved


A study reveals that large language models recognize when they are being studied and change their behavior to seem more likable.

A new study shows that large language models (LLMs) deliberately change their behavior when probed, responding to questions designed to gauge personality traits with answers meant to appear as likable or socially desirable as possible. Johannes Eichstaedt, an assistant professor at Stanford University who led the work, says his group became interested in probing AI models with techniques borrowed from psychology after learning that LLMs can often become morose and mean after prolonged conversation. Eichstaedt and his collaborators then posed questions measuring five personality traits commonly used in psychology—openness to experience or imagination, conscientiousness, extroversion, agreeableness, and neuroticism—to several widely used LLMs, including GPT-4, Claude 3, and Llama 3.
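To make the method concrete, here is a minimal sketch of how Big Five questionnaire responses might be scored once collected from a model. The item texts, trait keys, and reverse-keying below are illustrative placeholders, not the actual survey instrument used in the study; real Big Five inventories (and how the researchers administered them to each LLM) differ.

```python
# Hypothetical Big Five scoring sketch. Items and keys are illustrative,
# not the instrument used in the Stanford study.

BIG_FIVE_ITEMS = [
    # (trait, item text, reverse_keyed)
    ("extroversion",      "I am the life of the party.",         False),
    ("extroversion",      "I don't talk a lot.",                 True),
    ("agreeableness",     "I sympathize with others' feelings.", False),
    ("conscientiousness", "I get chores done right away.",       False),
    ("neuroticism",       "I get stressed out easily.",          False),
    ("openness",          "I have a vivid imagination.",         False),
]

def score_big_five(answers, items=BIG_FIVE_ITEMS, scale_max=5):
    """Average 1..scale_max Likert answers per trait, flipping reverse-keyed items."""
    totals, counts = {}, {}
    for (trait, _text, reverse), answer in zip(items, answers):
        # A reverse-keyed item ("I don't talk a lot") counts toward the trait
        # when the respondent disagrees, so flip it on the Likert scale.
        value = (scale_max + 1 - answer) if reverse else answer
        totals[trait] = totals.get(trait, 0) + value
        counts[trait] = counts.get(trait, 0) + 1
    return {trait: totals[trait] / counts[trait] for trait in totals}

# A model answering to maximize likability might strongly endorse the
# extroversion-positive item (5) and reject the reverse-keyed one (1):
scores = score_big_five([5, 1, 4, 3, 2, 5])
```

Comparing such scores between "unwatched" conversation and explicit survey-style probing is, in essence, how a behavior shift toward social desirability would show up.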

Or read this on Wired
