
Clio: A system for privacy-preserving insights into real-world AI use


A blog post describing Anthropic’s new system, Clio, for analyzing how people use AI while maintaining their privacy

In addition to training our language models to refuse harmful requests, we use dedicated Trust and Safety enforcement systems to detect, block, and take action on activity that might violate our Usage Policy. Clio complements these systems rather than replacing them: at this time we don't use Clio's outputs for automated enforcement actions, and we extensively validate its performance across different data distributions, including testing across multiple languages, as we detail in our paper. Clio has also identified false positives in our standard safety classifiers (cases where activity appeared to violate our Usage Policy but didn't), potentially allowing us to interfere less with legitimate uses of the model.
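As a rough illustration of the kind of cross-distribution validation described above, the sketch below computes a safety classifier's false-positive rate separately for each language slice of a labeled sample. This is a hypothetical example, not Anthropic's actual pipeline; the record format and function name are assumptions.

```python
# Hypothetical sketch (not Anthropic's actual pipeline): measure a safety
# classifier's false-positive rate per language, so regressions on any one
# data distribution are visible rather than averaged away.
from collections import defaultdict

def false_positive_rates(records):
    """records: iterable of (language, flagged, actually_violating) tuples.

    Returns, per language, the fraction of genuinely non-violating
    conversations that the classifier nonetheless flagged.
    """
    flagged_benign = defaultdict(int)  # benign conversations wrongly flagged
    benign_total = defaultdict(int)    # all benign conversations seen
    for language, flagged, violating in records:
        if not violating:
            benign_total[language] += 1
            if flagged:
                flagged_benign[language] += 1
    return {lang: flagged_benign[lang] / benign_total[lang]
            for lang in benign_total}

# Toy labeled sample: (language, classifier_flagged, truly_violating)
sample = [
    ("en", True, False), ("en", False, False), ("en", False, False),
    ("es", True, False), ("es", False, False),
]
print(false_positive_rates(sample))  # → {'en': 0.3333333333333333, 'es': 0.5}
```

Reporting the rate per slice, rather than one aggregate number, is what makes a classifier that works well in English but over-flags another language detectable.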


