AI false information rate for news nearly doubles in one year
Published Sept. 4, 2025

Despite a year of technical advances in the AI industry, generative AI tools now fail at nearly twice last year's rate on one of the most basic tasks: distinguishing facts from falsehoods.
Instead of citing data cutoffs or declining to weigh in on sensitive topics, the models now pull from a polluted online information ecosystem, sometimes deliberately seeded by vast networks of malign actors, including Russian disinformation operations, and treat unreliable sources as credible. Those actors exploit the chatbots' new eagerness to answer news queries, laundering falsehoods through low-engagement websites, social media posts, and AI-generated content farms that the models fail to distinguish from credible outlets. In short, the push to make chatbots more responsive and timely has inadvertently made them more likely to spread propaganda.