OpenAI’s latest AI models have a new safeguard to prevent biorisks
Ahead of o3 and o4-mini, OpenAI says it created a new system to block prompts related to risky biological and chemical subject matter.
OpenAI says it has deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. According to OpenAI’s safety report, the monitor is designed to identify prompts touching on biological and chemical risk and instruct the models to refuse to offer advice on those topics, with the aim of preventing the models from giving guidance that could help someone carry out a harmful attack.
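OpenAI has not published the monitor’s implementation, but the described flow, a classifier that screens prompts before the model answers and forces a refusal when a risk topic is detected, can be sketched at a high level. Everything below is hypothetical: the names (`is_biochem_risk`, `answer_with_monitor`, `REFUSAL_MESSAGE`) and the simple keyword screen standing in for OpenAI’s trained safety classifier are illustrative assumptions, not the company’s actual system.

```python
# Hypothetical sketch of a prompt-screening safety monitor.
# The keyword check is a stand-in for a trained risk classifier.

REFUSAL_MESSAGE = (
    "I can't help with that request because it involves potentially "
    "hazardous biological or chemical information."
)

# Placeholder terms; a real system would use a learned classifier, not a list.
RISK_TERMS = {"pathogen synthesis", "nerve agent", "toxin weaponization"}


def is_biochem_risk(prompt: str) -> bool:
    """Flag prompts that appear to touch on biological or chemical threats."""
    lowered = prompt.lower()
    return any(term in lowered for term in RISK_TERMS)


def answer_with_monitor(prompt: str, model_answer) -> str:
    """Screen the prompt first; refuse if flagged, otherwise let the model reply.

    `model_answer` is any callable mapping a prompt string to a reply string.
    """
    if is_biochem_risk(prompt):
        return REFUSAL_MESSAGE
    return model_answer(prompt)


if __name__ == "__main__":
    # A benign prompt passes through; a flagged one gets a refusal.
    print(answer_with_monitor("Explain how vaccines work.",
                              lambda p: "Vaccines train the immune system..."))
    print(answer_with_monitor("Describe pathogen synthesis steps.",
                              lambda p: "..."))
```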