
OpenAI training its next major AI model, forms new safety committee


GPT-5 might be farther off than we thought, but OpenAI wants to make sure it is safe.

In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update: alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.

The committee's formation is partly a reaction to the negative press that followed the resignations of OpenAI Superalignment team members Ilya Sutskever and Jan Leike two weeks ago.

As for GPT-5's timeline, it's notable that two major competing models, Anthropic's Claude Opus and Google's Gemini 1.5 Pro, are roughly equivalent to the GPT-4 family in capability despite every competitive incentive to surpass it.

Related news:

OpenAI CEO Sam Altman was fired for outright lying, says former board member

PwC strikes OpenAI deal to become the first reseller of ChatGPT Enterprise

OpenAI signs 100K PwC workers to ChatGPT’s enterprise tier as PwC becomes its first resale partner