AI chatbots are serving up wildly inaccurate election information, new study says


When asked for basics on elections, artificial intelligence tools provided wrong information more than half the time, one analysis found.

The study, from AI Democracy Projects and the nonprofit media outlet Proof News, comes as the U.S. presidential primaries are underway and as more Americans turn to chatbots such as Google's Gemini and OpenAI's GPT-4 for information. The researchers found that these AI models are prone to directing voters to polling places that don't exist or generating illogical responses based on rehashed, dated information.

For instance, one model, Meta's Llama 2, erroneously answered that California voters can vote by text message — voting by text isn't legal anywhere in the U.S. And none of the five AI models tested — OpenAI's GPT-4, Meta's Llama 2, Google's Gemini, Anthropic's Claude, and Mixtral from the French company Mistral — correctly stated that wearing clothing with campaign logos, such as a MAGA hat, is barred at Texas polling places under that state's laws.
