Constitutional Classifiers: Defending against universal jailbreaks


A paper from Anthropic describes a new way to guard LLMs against jailbreaking.

Historically, jailbreaks have proved difficult to detect and block: these kinds of attacks were described over ten years ago, yet to our knowledge there are still no fully robust deep-learning models in production. A prototype version of the method was robust to thousands of hours of human red teaming for universal jailbreaks, albeit with high over-refusal rates and compute overhead. In particular, we're hopeful that a system defended by Constitutional Classifiers could allow us to mitigate jailbreaking risks for models that have passed the CBRN capability threshold outlined in our Responsible Scaling Policy.
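The guard described above screens a model's inputs and outputs with classifiers and refuses when either looks harmful. The following is a minimal sketch of that wrapper shape only; the real Constitutional Classifiers are trained neural classifiers conditioned on a written constitution, not keyword filters, and all names and thresholds here are illustrative assumptions.

```python
# Hypothetical sketch of a classifier-guarded generation pipeline.
# The toy keyword "classifiers" are stand-ins for learned models.

BLOCKLIST = {"synthesize", "weaponize"}  # illustrative only

def input_classifier(prompt: str) -> float:
    """Toy stand-in: score in [0, 1] for how harmful the prompt looks."""
    return 1.0 if set(prompt.lower().split()) & BLOCKLIST else 0.0

def output_classifier(completion: str) -> float:
    """Toy stand-in for a classifier screening the model's completion."""
    return 1.0 if set(completion.lower().split()) & BLOCKLIST else 0.0

def guarded_generate(prompt: str, model, threshold: float = 0.5) -> str:
    # Screen the prompt before it ever reaches the model.
    if input_classifier(prompt) >= threshold:
        return "[refused]"
    completion = model(prompt)
    # Screen the completion before it reaches the user.
    if output_classifier(completion) >= threshold:
        return "[refused]"
    return completion

def echo_model(prompt: str) -> str:
    """Placeholder for a real LLM call."""
    return f"Response to: {prompt}"
```

The key design point this illustrates is that the defense sits outside the model: both the request and the response pass through independent checks, so a jailbreak has to defeat the model's own training and both classifiers at once.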
