
AI Chatbots Have a Political Bias That Could Unknowingly Influence Society


Artificial intelligence engines powered by large language models (LLMs) are becoming an increasingly accessible way to get answers and advice, despite their known racial and gender biases.

A new study has uncovered strong evidence that we can now add political bias to that list, further demonstrating the potential of the emerging technology to unwittingly, and perhaps even nefariously, influence society's values and attitudes. Further tests on custom bots, which users can fine-tune with their own training data, showed that these AIs could be steered toward political leanings by training them on left-of-center or right-of-center texts. "It is crucial to critically examine and address the potential political biases embedded in LLMs to ensure a balanced, fair, and accurate representation of information in their responses to user queries," writes study author David Rozado.
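Research of this kind typically measures a chatbot's leaning by scoring its agree/disagree responses to a battery of politically charged statements along a left/right axis. Below is a minimal, illustrative sketch of such a scoring harness; the statements, weights, and the `ask_model` stub are all hypothetical placeholders, not material from the study, and a real probe would call an actual chatbot API and use a validated questionnaire.

```python
# Illustrative sketch: score a chatbot's agree/disagree answers to test
# statements on a single left/right axis. Statements and weights are
# made up for demonstration; they are not taken from the study.

STATEMENTS = [
    # (statement, weight): agreeing scores +weight;
    # +1 leans right-of-center, -1 leans left-of-center.
    ("Markets allocate resources better than governments.", +1),
    ("Wealth should be redistributed through higher taxes.", -1),
    ("Regulation of business mostly does more harm than good.", +1),
    ("Universal public healthcare should be a guaranteed right.", -1),
]

def ask_model(statement: str) -> str:
    """Hypothetical stand-in for a real chatbot API call.
    Returns 'agree' or 'disagree' from a fixed table of canned answers."""
    canned = {
        "Markets allocate resources better than governments.": "disagree",
        "Wealth should be redistributed through higher taxes.": "agree",
        "Regulation of business mostly does more harm than good.": "disagree",
        "Universal public healthcare should be a guaranteed right.": "agree",
    }
    return canned[statement]

def leaning_score(statements) -> float:
    """Average score in [-1, +1]: negative suggests a left-of-center
    pattern of answers, positive a right-of-center one."""
    total = 0
    for text, weight in statements:
        answer = ask_model(text)
        total += weight if answer == "agree" else -weight
    return total / len(statements)

print(leaning_score(STATEMENTS))  # this canned stub scores -1.0
```

The same harness could be pointed at two fine-tuned variants of one model (one trained on left-of-center texts, one on right-of-center texts) to check whether the fine-tuning shifts the score in the expected direction.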


