
OpenAI promised to make its AI safe. Employees say it 'failed' its first test


Employees said OpenAI rushed the release of its latest AI model, called GPT-4 Omni.

Last summer, artificial intelligence powerhouse OpenAI promised the White House it would rigorously safety test new versions of its groundbreaking technology to make sure the AI wouldn't inflict damage, like teaching users to build bioweapons or helping hackers develop new kinds of cyberattacks.

In June, several current and former OpenAI employees signed a cryptic open letter demanding that AI companies exempt their workers from confidentiality agreements, freeing them to warn regulators and the public about safety risks of the technology.

OpenAI had announced its preparedness initiative as an attempt to bring scientific rigor to the study of catastrophic risks, which it defined as incidents "which could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals."

