OpenAI no longer considers manipulation and mass disinformation campaigns a risk worth testing for before releasing its AI models
OpenAI’s new AI safety policy drops pre-release testing requirements for persuasive or manipulative capabilities, sparking concern among experts
Photo caption: Sam Altman, CEO of OpenAI, whose AI agent has set a new standard of performance on Humanity's Last Exam. (Nathan Laine/Bloomberg via Getty Images)