Chatbots tell people what they want to hear
A Johns Hopkins-led team found that chatbots reinforce our biases, providing insight into how AI could widen the public divide on controversial issues
Published May 13, 2024

Chatbots share limited information, reinforce ideologies, and, as a result, can lead to more polarized thinking on controversial issues, according to new Johns Hopkins University–led research.

The study challenges perceptions that chatbots are impartial and provides insight into how conversational search systems could widen the public divide on hot-button issues and leave people vulnerable to manipulation.

"Because people are reading a summary paragraph generated by AI, they think they're getting unbiased, fact-based answers," said lead author Ziang Xiao, an assistant professor of computer science in the Whiting School of Engineering at Johns Hopkins who studies human-AI interactions.