OpenAI promised to make its AI safe. Employees say it 'failed' its first test
Employees said OpenAI rushed the release of its latest AI model, called GPT-4 Omni.
Last summer, artificial intelligence powerhouse OpenAI promised the White House it would rigorously safety test new versions of its groundbreaking technology to make sure the AI wouldn’t inflict damage — like teaching users to build bioweapons or helping hackers develop new kinds of cyberattacks.

In June, several current and former OpenAI employees signed a cryptic open letter demanding that AI companies exempt their workers from confidentiality agreements, freeing them to warn regulators and the public about safety risks of the technology.

OpenAI announced the preparedness initiative as an attempt to bring scientific rigor to the study of catastrophic risks, which it defined as incidents “which could result in hundreds of billions of dollars in economic damage or lead to the severe harm or death of many individuals.”