Chatbots tell people what they want to hear


A Johns Hopkins-led team found that chatbots reinforce our biases, providing insight into how AI could widen the public divide on controversial issues

Published May 13, 2024

Chatbots share limited information, reinforce ideologies, and, as a result, can lead to more polarized thinking when it comes to controversial issues, according to new Johns Hopkins University–led research.

The study challenges perceptions that chatbots are impartial and provides insight into how using conversational search systems could widen the public divide on hot-button issues and leave people vulnerable to manipulation.

"Because people are reading a summary paragraph generated by AI, they think they're getting unbiased, fact-based answers," said lead author Ziang Xiao, an assistant professor of computer science in the Whiting School of Engineering at Johns Hopkins who studies human-AI interactions.
