
OpenAI confirms new frontier models o3 and o3-mini


OpenAI has just confirmed that it is releasing new reasoning models named o3 and o3-mini, successors to the o1 and o1-mini models.

Deliberative alignment improves upon previous methods such as Reinforcement Learning from Human Feedback (RLHF) and Constitutional AI, which use safety specifications only for label generation rather than embedding the policies directly into the models. Results shared by OpenAI researchers in a new, non-peer-reviewed paper indicate that this method improves performance on safety benchmarks, reduces harmful outputs, and yields better adherence to content and style guidelines.

Applicants must fill out an online form that asks for a variety of information, including links to prior published papers and their code repositories on GitHub. They must also select which of the models — o3 or o3-mini — they wish to test, and describe what they plan to use them for.


Read the original story on VentureBeat.


Related news:


OpenAI teases new reasoning model—but don’t expect to try it soon


OpenAI announces new o3 models


OpenAI 2024 event: How to watch new ChatGPT product reveals and demos