
OpenAI is launching an ‘independent’ safety board that can stop its model releases

The board was established after a review of OpenAI’s safety processes.

The committee recommended establishing the independent board after a recent 90-day review of OpenAI’s “safety and security-related processes and safeguards.” The committee, which is chaired by Zico Kolter and includes Adam D’Angelo, Paul Nakasone, and Nicole Seligman, will “be briefed by company leadership on safety evaluations for major model releases, and will, along with the full board, exercise oversight over model launches, including having the authority to delay a release until safety concerns are addressed,” OpenAI says. The review by OpenAI’s Safety and Security Committee also helped identify “additional opportunities for industry collaboration and information sharing to advance the security of the AI industry.” The company also says it will look for “more ways to share and explain our safety work” and for “more opportunities for independent testing of our systems.”


Read more on:

OpenAI

model releases

safety board

Related news:

AI coding assistant Supermaven raises cash from OpenAI and Perplexity co-founders

OpenAI Messed With the Wrong Mega-Popular Parenting Forum

Awesome LLM Strawberry (OpenAI o1)