Anthropic's Claude AI now has the ability to end 'distressing' conversations
A new feature lets Claude Opus 4 and 4.1 end conversations with users in cases of "persistently harmful or abusive interactions."
Anthropic said the two Claude models can exit harmful conversations, such as "requests from users for sexual content involving minors and attempts to solicit information that would enable large-scale violence or acts of terror." The models will only end a conversation "as a last resort when multiple attempts at redirection have failed and hope of a productive interaction has been exhausted," according to Anthropic. Even so, the company claims most users won't experience Claude cutting a conversation short, even when discussing highly controversial topics, since the feature is reserved for "extreme edge cases."
Or read this on Engadget