
Here’s how OpenAI will determine how powerful its AI systems are


OpenAI is grading its large language models against a new internal scale to track how close the company is to achieving artificial general intelligence (AGI).

This new grading scale, though still under development, was introduced a day after OpenAI announced its collaboration with Los Alamos National Laboratory, which aims to explore how advanced AI models like GPT-4o can safely assist in bioscientific research. It also arrives amid questions about the company's approach to safety: Jan Leike, a key OpenAI researcher, resigned and claimed in a post that “safety culture and processes have taken a backseat to shiny products” at the company. Meanwhile, CEO Sam Altman said late last year that the company had recently “pushed the veil of ignorance back,” suggesting that its models are becoming remarkably more intelligent.

Related news:

Why The Atlantic signed a deal with OpenAI

OpenAI Is Testing Its Powers of Persuasion

A former OpenAI safety employee said he quit because the company's leaders were 'building the Titanic' and wanted 'newer, shinier' things to sell