OpenAI training its next major AI model, forms new safety committee
GPT-5 might be farther off than we thought, but OpenAI wants to make sure it is safe.
In this case, "safety" partially means the usual "we won't let the AI go rogue and take over the world," but it also includes a broader set of "processes and safeguards" that the company spelled out in a May 21 safety update, covering alignment research, protecting children, upholding election integrity, assessing societal impacts, and implementing security measures.

The announcement can be read, in part, as a reaction to the negative press that followed the resignations of OpenAI Superalignment team members Ilya Sutskever and Jan Leike two weeks ago.

It also arrives amid questions about how much further current models can be pushed: two major competing models, Anthropic's Claude Opus and Google's Gemini 1.5 Pro, are roughly equivalent to the GPT-4 family in capability despite every competitive incentive to surpass it.