OpenAI says its latest GPT-4o model is ‘medium’ risk

OpenAI’s safety assessment for its latest model, GPT-4o, rates the model as “medium” risk.

The assessment examined risks such as GPT-4o creating unauthorized clones of someone’s voice, generating erotic and violent content, or reproducing chunks of copyrighted audio. There is a clear potential risk of the model accidentally spreading misinformation or being hijacked by malicious actors, even as OpenAI is keen to highlight that it is testing real-world scenarios to prevent misuse. But the biggest takeaway from the GPT-4o System Card is that, despite the group of external red teamers and testers, much of this relies on OpenAI evaluating itself.

Related news:

OpenAI finds that GPT-4o does some truly bizarre stuff sometimes

OpenAI adds a Carnegie Mellon professor to its board of directors

Democrats push Sam Altman on OpenAI’s safety record