OpenAI trained o1 and o3 to ‘think’ about its safety policy
OpenAI announced a new family of AI reasoning models on Friday, o3, which the startup claims is more advanced than o1 or anything else it has released.
But at the same time, it's more controversial: David Sacks, Elon Musk, and Marc Andreessen have said some AI safety measures are actually "censorship," highlighting the subjective nature of these decisions.

Here's how o1 and o3 work, in simple terms: after a user presses enter on a prompt in ChatGPT, OpenAI's reasoning models take anywhere from 5 seconds to a few minutes to re-prompt themselves with follow-up questions. OpenAI took this approach because asking o1 to read through the company's entire safety policy – which is quite a long document – created high latency and unnecessarily expensive compute costs.
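For a concrete picture of what that self-re-prompting loop might look like, here is a minimal, hypothetical Python sketch. None of the names below (`query_model`, `POLICY_EXCERPTS`, `deliberate`) come from OpenAI, and the model call is stubbed out; the loop simply illustrates the idea of accumulating follow-up questions, seeded with short safety-policy excerpts rather than the full document, before producing a final answer.

```python
# Hypothetical sketch of a "deliberative" re-prompting loop.
# All names here are illustrative stand-ins, not OpenAI's actual
# API or implementation.

# Short excerpts stand in for the full safety policy, which would be
# too long (and too slow/expensive) to feed to the model every time.
POLICY_EXCERPTS = [
    "Refuse requests for instructions that enable serious harm.",
    "Answer sensitive questions factually and without judgment.",
]

def query_model(prompt: str) -> str:
    """Stand-in for a call to a reasoning model; returns a canned reply."""
    return f"(model output for: {prompt[:60]}...)"

def deliberate(user_prompt: str, max_steps: int = 3) -> str:
    """Re-prompt the model with follow-up questions before answering,
    surfacing relevant policy excerpts in the intermediate reasoning."""
    context = user_prompt
    for step in range(max_steps):
        followup = query_model(
            f"Step {step + 1}: What should be considered before answering?\n"
            f"Relevant policy: {POLICY_EXCERPTS}\n"
            f"Reasoning so far: {context}"
        )
        context += "\n" + followup  # accumulate intermediate reasoning
    return query_model(f"Give the final answer.\nReasoning: {context}")

print(deliberate("How do I pick a lock?"))
```

The sketch's use of short excerpts instead of the whole policy mirrors the latency and compute concern described above; the real models' chain of thought is, of course, far more sophisticated than a fixed loop.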