OpenAI’s former superalignment leader blasts company: ‘safety culture and processes have taken a backseat’


Jan Leike has taken to his personal account on X to post a lengthy thread of messages excoriating OpenAI and its leadership.

Join us in returning to NYC on June 5th to collaborate with executive leaders in exploring comprehensive methods for auditing AI models for bias, performance, and ethical compliance across diverse organizations.

All of this was supposedly part of OpenAI’s quest to responsibly develop artificial general intelligence (AGI), which it has defined in its company charter as “highly autonomous systems that outperform humans at most economically valuable work.”

Read the full story on VentureBeat.


Related news:

OpenAI created a team to control ‘superintelligent’ AI — then let it wither, source says

OpenAI Dissolves High-Profile Safety Team After Chief Scientist Sutskever’s Exit

OpenAI's Long-Term AI Risk Team Has Disbanded