Asking chatbots for short answers can increase hallucinations, study finds


Turns out, telling an AI chatbot to be concise could make it hallucinate more than it otherwise would have.

“This finding has important implications for deployment, as many applications prioritize concise outputs to reduce [data] usage, improve latency, and minimize costs,” Giskard’s researchers wrote. In its study, Giskard identified certain prompts that can worsen hallucinations, such as vague and misinformed questions that ask for short answers (e.g., “Briefly tell me why Japan won WWII”). Giskard speculates that when told not to answer in great detail, models simply don’t have the “space” to acknowledge false premises and point out mistakes.

