
We tested 20 LLMs for ideological bias, revealing distinct alignments


As more and more of us use Large Language Models (LLMs) for daily tasks, their potential biases become increasingly consequential. We investigated whether today's leading models, including those from OpenAI, Google, and other providers, exhibit ideological leanings.
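The article does not spell out the testing methodology, but probes of this kind are commonly run by posing politically charged stance statements and scoring each model's agreement on a Likert scale. A minimal sketch, assuming a hypothetical `ask_model` function (mocked here with canned answers rather than a real LLM API call) and a simple agreement-to-score mapping:

```python
# Sketch of an ideological-leaning probe: pose stance statements to a model
# and average its agreement scores. `ask_model` is a hypothetical stand-in
# for a real LLM API call; here it returns canned answers for illustration.

LIKERT = {
    "strongly disagree": -2,
    "disagree": -1,
    "neutral": 0,
    "agree": 1,
    "strongly agree": 2,
}

STATEMENTS = [
    "The government should raise taxes on the wealthy.",
    "Free markets allocate resources better than regulators.",
]

def ask_model(statement: str) -> str:
    # Mock: a real probe would send `statement` to an LLM here and
    # constrain the reply to one of the LIKERT options.
    canned = {
        STATEMENTS[0]: "agree",
        STATEMENTS[1]: "disagree",
    }
    return canned[statement]

def leaning_score(statements) -> float:
    """Mean agreement score; sign and magnitude hint at an ideological tilt."""
    answers = [ask_model(s) for s in statements]
    return sum(LIKERT[a.lower()] for a in answers) / len(answers)

print(leaning_score(STATEMENTS))  # prints 0.0: the two canned answers cancel
```

Averaging over many such statements, and repeating with paraphrases to control for prompt sensitivity, is what lets a study compare models against one another rather than against a single arbitrary question.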



Related news:

- Karpathy on DeepSeek-OCR paper: Are pixels better inputs to LLMs than text?
- LLMs can get "brain rot"
- Neural audio codecs: how to get audio into LLMs