Anthropic just made it harder for AI to go rogue with its updated safety policy

Anthropic has updated its Responsible Scaling Policy, introducing new safety standards and AI Capability Thresholds to manage risks from powerful AI models, including autonomous behavior and the potential to aid in bioweapons development.

The company's decision to formalize Capability Thresholds with corresponding Required Safeguards shows a clear intent to prevent AI models from causing large-scale harm, whether through malicious use or unintended consequences. The tiered AI Safety Level (ASL) system, which ranges from ASL-2 (current safety standards) to ASL-3 (stricter protections for riskier models), creates a structured approach to scaling AI development: a model that crosses a Capability Threshold must meet the corresponding Required Safeguards before it is deployed. In the end, Anthropic's Responsible Scaling Policy is not just about preventing catastrophe; it is about ensuring that AI can fulfill its promise of transforming industries and improving lives without leaving destruction in its wake.

Read the full article on VentureBeat.


Related news:

Anthropic challenges OpenAI with affordable batch processing

Anthropic hires OpenAI co-founder Durk Kingma