
OpenAI may ‘adjust’ its safeguards if rivals release ‘high-risk’ AI


In an update to its Preparedness Framework, OpenAI says it may “adjust” its safety requirements if a rival lab releases “high-risk” AI.

“If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements,” wrote OpenAI in a blog post published Tuesday afternoon. According to the Financial Times, OpenAI gave testers less than a week for safety checks for an upcoming major model — a compressed timeline compared to previous releases. “Covered systems that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before they are deployed,” wrote OpenAI in its blog post.

Read the full story on TechCrunch


Related news:

- OpenAI Working on Social Network With Image Generation Features
- OpenAI hires team behind GV-backed AI eval platform Context.ai
- OpenAI is apparently making a social network