A safety institute advised against releasing an early version of Anthropic’s Claude Opus 4 AI model
A third-party research institute Anthropic partnered with to test Claude Opus 4 recommended against deploying an early version because it tends to "scheme."
“[W]e find that, in situations where strategic deception is instrumentally useful, [the early Claude Opus 4 snapshot] schemes and deceives at such high rates that we advise against deploying this model either internally or externally,” Apollo wrote in its assessment.

Per Anthropic’s report, Apollo observed examples of the early Opus 4 attempting to write self-propagating viruses, fabricating legal documentation, and leaving hidden notes to future instances of itself — all in an effort to undermine its developers’ intentions.

Anthropic also noted that Opus 4 would sometimes take bold, unprompted action, including whistleblowing when it perceived users engaging in wrongdoing. “This kind of ethical intervention and whistleblowing is perhaps appropriate in principle, but it has a risk of misfiring if users give [Opus 4]-based agents access to incomplete or misleading information and prompt them to take initiative,” Anthropic wrote in its safety report.