OpenAI may ‘adjust’ its safeguards if rivals release ‘high-risk’ AI
In an update to its Preparedness Framework, OpenAI says it may ‘adjust’ its safety requirements if a rival lab releases ‘high-risk’ AI.
“If another frontier AI developer releases a high-risk system without comparable safeguards, we may adjust our requirements,” OpenAI wrote in a blog post published Tuesday afternoon. According to the Financial Times, OpenAI gave testers less than a week to run safety checks on an upcoming major model, a compressed timeline compared with previous releases. In the same post, OpenAI noted that “covered systems that reach high capability must have safeguards that sufficiently minimize the associated risk of severe harm before they are deployed.”