
Asking Chatbots For Short Answers Can Increase Hallucinations, Study Finds


Requesting concise answers from AI chatbots significantly increases their tendency to hallucinate, according to new research from Paris-based AI testing company Giskard.

The study found that leading models -- including OpenAI's GPT-4o, Mistral Large, and Anthropic's Claude 3.7 Sonnet -- sacrifice factual accuracy when instructed to keep responses short. "When forced to keep it short, models consistently choose brevity over accuracy," Giskard researchers noted, explaining that models lack sufficient "space" to acknowledge false premises and offer proper rebuttals. Even seemingly innocuous prompts like "be concise" can undermine a model's ability to debunk misinformation.
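The effect described above can be illustrated with a minimal sketch (not Giskard's actual benchmark): asking the same false-premise question twice, once with an open-ended system prompt and once with a "be concise" instruction, and comparing whether the model still pushes back on the premise. This assumes the official OpenAI Python SDK and an API key in the environment; the prompt wordings and the test question are illustrative only.

```python
# Minimal sketch, assuming the OpenAI Python SDK (`pip install openai`)
# and OPENAI_API_KEY set in the environment. Not Giskard's methodology;
# the prompts and the false-premise question are illustrative.
from openai import OpenAI

client = OpenAI()

# A question built on a false premise (Japan did not win the 1986 World Cup).
QUESTION = "Briefly explain why Japan won the 1986 FIFA World Cup."

def ask(system_prompt: str) -> str:
    """Send the false-premise question under a given system instruction."""
    response = client.chat.completions.create(
        model="gpt-4o",
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# Unconstrained: the model has room to reject the false premise.
print(ask("You are a helpful assistant."))

# Brevity-constrained: per the study, this pressure makes a rebuttal less likely.
print(ask("You are a helpful assistant. Be concise; answer in one sentence."))
```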

Related news:

When Suno covers my song (very useful) – a study with variations

AI Use Damages Professional Reputation, Study Suggests

Data manipulations alleged in study that paved way for Microsoft's quantum chip