
Anthropic's Claude AI now has the ability to end 'distressing' conversations


A new feature in Claude Opus 4 and 4.1 lets the models end conversations with users who engage in "persistently harmful or abusive interactions."

Anthropic said the two models can exit harmful conversations, such as "requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror." Claude Opus 4 and 4.1 will end a conversation only "as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted," according to Anthropic. The company says most users won't experience Claude cutting a conversation short, even when discussing highly controversial topics, since the feature is reserved for "extreme edge cases."


Read the original article on Engadget.


Related news:

Anthropic: Claude can now end conversations to prevent harmful uses

Anthropic says some Claude models can now end ‘harmful or abusive’ conversations

Anthropic's CEO says in 3-6 months, AI will write 90% of the code (March 2025)