Bad Actors Are Grooming LLMs to Produce Falsehoods


Our research shows that even the latest "reasoning" models are vulnerable

It’s one thing when a chatbot flunks the Tower of Hanoi, as Apple notoriously illustrated earlier this month, and quite another when poor reasoning feeds the flood of propaganda that threatens to overwhelm the information ecosystem. In February 2025, ASP’s original report on LLM grooming described the apparent attempts of the Pravda network, a centralized collection of websites that spread pro-Russia disinformation, to taint generative models with the millions of bogus articles it publishes each year. A recent article on the Pravda network’s English-language site, for instance, regurgitates antisemitic tropes about “globalists,” falsely claiming that secret societies are somehow ruling the world.

Related news:

Why LLMs Can't Write Q/Kdb+: Writing Code Right-to-Left

tinymcp: Let LLMs control embedded devices via the Model Context Protocol

A non-anthropomorphized view of LLMs