Chatbots Are Primed to Warp Reality
A growing body of research shows how AI can subtly mislead users—and even implant false memories.
Research on AI-generated misinformation about election procedures published this February found that five well-known large language models provided incorrect answers roughly half the time—for instance, by misstating voter-identification requirements, which could lead to someone’s ballot being refused. “The chatbot outputs often sounded plausible, but were inaccurate in part or full,” Alondra Nelson, a professor at the Institute for Advanced Study who previously served as acting director of the White House Office of Science and Technology Policy, and who co-authored that research, told me.

Chatbots could provide an evolution of the push polls that some campaigns have used to influence voters: fake surveys designed to instill negative beliefs about rivals, such as one that asks “What would you think of Joe Biden if I told you he was charged with tax evasion?,” which baselessly associates the president with fraud.