Get the latest tech news
Asking chatbots for short answers can increase hallucinations, study finds
Turns out, telling an AI chatbot to be concise could make it hallucinate more than it otherwise would have.
“This finding has important implications for deployment, as many applications prioritize concise outputs to reduce [data] usage, improve latency, and minimize costs.” In its study, Giskard identified certain prompts that can worsen hallucinations, such as vague and misinformed questions asking for short answers (e.g. “Briefly tell me why Japan won WWII”). Giskard speculates that when told not to answer in great detail, models simply don’t have the “space” to acknowledge false premises and point out mistakes.
Or read this on TechCrunch