Updates to Consumer Terms and Privacy Policy
Anthropic is an AI safety and research company that's working to build reliable, interpretable, and steerable AI systems.
We're now giving users the choice to allow their data to be used to improve Claude and strengthen our safeguards against harmful usage such as scams and abuse. By participating, you'll help us improve model safety, making our systems for detecting harmful content more accurate and less likely to flag harmless conversations. The updated retention period applies only to new or resumed chats and coding sessions, and will allow us to better support model development and safety improvements.