OpenAI no longer considers manipulation and mass disinformation campaigns a risk worth testing for before releasing its AI models


OpenAI’s new AI safety policy drops pre-release testing requirements for persuasive or manipulative capabilities, sparking concern among experts

Sam Altman, CEO of OpenAI, whose AI agent has set a new standard of performance on Humanity’s Last Exam. Nathan Laine—Bloomberg/Getty Images

Related news:

- OpenAI’s Stargate project sets its sights on international expansion
- 'Why would he take such a risk?' My censor and me
- OpenAI releases new simulated reasoning models with full tool access