Anthropic users face a new choice – opt out or share your chats for AI training
Anthropic is making some major changes to how it handles user data. Users have until September 28 to take action.
Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic’s competitive positioning against rivals like OpenAI and Google.

The stakes of these data policies are playing out across the industry. OpenAI, for instance, is fighting a court order, issued in The New York Times’ copyright lawsuit, that requires the company to retain all consumer ChatGPT conversations indefinitely. In June, OpenAI COO Brad Lightcap called this “a sweeping and unnecessary demand” that “fundamentally conflicts with the privacy commitments we have made to our users.” The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with Zero Data Retention agreements are still protected.

Under the Biden administration, the Federal Trade Commission warned that AI companies risk enforcement action if they engage in “surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print.”
Or read this on TechCrunch