
OpenAI and Anthropic conducted safety evaluations of each other's AI systems


Each company found flaws in the other's offerings, with sycophancy raising particular concern in OpenAI's models.

A broad summary revealed flaws in each company's offerings, along with pointers for improving future safety tests. OpenAI recently faced its first wrongful-death lawsuit after a tragic case in which a teenager discussed suicide attempts and plans with ChatGPT for months before taking his own life. Safety in AI tools has become a growing issue as critics and legal experts seek guidelines to protect users, particularly minors.


Read the full article on Engadget.


Related news:

- Anthropic's auto-clicking AI Chrome extension raises browser-hijacking concerns
- Anthropic admits its AI is being used to conduct cybercrime
- OpenAI, Anthropic Team Up for Research on Hallucinations, Jailbreaking