Anthropic users face a new choice – opt out or share your chats for AI training


Anthropic is making some major changes to how it handles user data. Users have until September 28 to take action.

Training AI models requires vast amounts of high-quality conversational data, and access to millions of Claude interactions should provide exactly the kind of real-world content that can improve Anthropic's competitive positioning against rivals like OpenAI and Google.

Those rivals face their own data pressures. OpenAI is fighting a court order, stemming from The New York Times' lawsuit against it, that requires the company to retain consumer ChatGPT conversations indefinitely. In June, OpenAI COO Brad Lightcap called it "a sweeping and unnecessary demand" that "fundamentally conflicts with the privacy commitments we have made to our users." The court order affects ChatGPT Free, Plus, Pro, and Team users, though enterprise customers and those with Zero Data Retention agreements are still protected.

Regulators have taken notice of such policy shifts, too. Under the Biden administration, the Federal Trade Commission warned that AI companies risk enforcement action if they engage in "surreptitiously changing its terms of service or privacy policy, or burying a disclosure behind hyperlinks, in legalese, or in fine print."

Or read this on TechCrunch

Read more on: data, AI training, Anthropic users

Related news:

- TransUnion Says Hackers Accessed 4.4 Million Customers' Data
- MATLAB dev says ransomware gang stole data of 10,000 people
- TransUnion suffers data breach impacting over 4.4 million people