OpenAI says its latest GPT-4o model is ‘medium’ risk
OpenAI’s safety assessment for GPT-4o determined that the model is “medium” risk.
The assessment examined risks such as GPT-4o creating unauthorized clones of someone’s voice, producing erotic or violent content, or reproducing chunks of copyrighted audio. There is also a clear potential risk of the model accidentally spreading misinformation or being hijacked by malicious actors — even as OpenAI emphasizes that it is testing real-world scenarios to prevent misuse. But the biggest takeaway from the GPT-4o System Card is that, despite the group of external red teamers and testers, much of this relies on OpenAI evaluating itself.