Anthropic Developing Constitutional Classifiers to Safeguard AI Models From Jailbreak Attempts

Anthropic is hosting a temporary live demo version of a Constitutional Classifiers system to let users test its capabilities.

Advertisement
Written by Akash Dutta, Edited by Siddharth Suvarna | Updated: 4 February 2025 13:47 IST
Highlights
  • Constitutional Classifiers act as a layer on top of the AI model
  • Anthropic ran a bug bounty programme to test the system’s robustness
  • Constitutional Classifiers were tested on the Claude 3.5 Sonnet

Jailbreaking of an AI model is done by using unusual prompts to make it generate harmful output

Photo Credit: Anthropic

Anthropic announced the development of a new system on Monday that can protect artificial intelligence (AI) models from jailbreaking attempts. Dubbed Constitutional Classifiers, it is a safeguarding technique that can detect when a jailbreaking attempt is made at the input level and prevent the AI from generating a harmful response as a result of it. The AI firm has tested the robustness of the system via independent jailbreakers and has also opened a temporary live demo of the system to let any interested individual test its capabilities.

Anthropic Unveils Constitutional Classifiers

Jailbreaking in generative AI refers to unusual prompt writing techniques that can force an AI model to not adhere to its training guidelines and generate harmful and inappropriate content. Jailbreaking is not a new thing, and most AI developers implement several safeguards against it within the model. However, since prompt engineers keep creating new techniques, it is difficult to build a large language model (LLM) that is completely protected from such attacks.

Some jailbreaking techniques include extremely long and convoluted prompts that confuse the AI's reasoning capabilities. Others use multiple prompts to break down the safeguards, and some even use unusual capitalisation to break through AI defences.

Advertisement

In a post detailing the research, Anthropic announced that it is developing Constitutional Classifiers as a protective layer for AI models. There are two classifiers — input and output — which are provided with a list of principles to which the model should adhere. This list of principles is called a constitution. Notably, the AI firm already uses constitutions to align the Claude models.

Advertisement

How Constitutional Classifiers work
Photo Credit: Anthropic

Advertisement

 

Now, with Constitutional Classifiers, these principles define the classes of content that are allowed and disallowed. This constitution is used to generate a large number of prompts and model completions from Claude across different content classes. The generated synthetic data is also translated into different languages and transformed into known jailbreaking styles. This way, a large dataset of content is created that can be used to break into a model.

Advertisement

This synthetic data is then used to train the input and output classifiers. Anthropic conducted a bug bounty programme, inviting 183 independent jailbreakers to attempt to bypass Constitutional Classifiers. An in-depth explanation of how the system works is detailed in a research paper published on arXiv. The company claimed no universal jailbreak (one prompt style that works across different content classes) was discovered.

Further, during an automated evaluation test, where the AI firm hit Claude using 10,000 jailbreaking prompts, the success rate was found to be 4.4 percent, as opposed to 86 percent for an unguarded AI model. Anthropic was also able to minimise excessive refusals (refusal of harmless queries) and additional processing power requirements of Constitutional Classifiers.

However, there are certain limitations. Anthropic acknowledged that Constitutional Classifiers might not be able to prevent every universal jailbreak. It could also be less resistant towards new jailbreaking techniques designed specifically to beat the system. Those interested in testing the robustness of the system can find the live demo version here. It will stay active till February 10.

 

Get your daily dose of tech news, reviews, and insights, in under 80 characters on Gadgets 360 Turbo. Connect with fellow tech lovers on our Forum. Follow us on X, Facebook, WhatsApp, Threads and Google News for instant updates. Catch all the action on our YouTube channel.

Advertisement
Popular Mobile Brands
  1. Top OTT Releases of the Week: Kantara Chapter 1, Lokah Chapter 1, Idli Kadai, and More
  2. iQOO 15 Indian Variant Allegedly Surfaces on Geekbench Ahead of Launch
  3. Realme GT 8 Pro India Launch Date Leaked: Here's When It Might Arrive
  4. Samsung Might Be Working on a 'More Slim' Version of the Galaxy S25 Edge
  5. Apple CEO Confirms Partnership Plans for AI Services Beyond OpenAI
  1. SpaceX Revises Artemis III Moon Mission with Simplified Starship Design
  2. Rare ‘Second-Generation’ Black Holes Detected, Proving Einstein Right Again
  3. Starlink Hiring for Payments, Tax and Accounting Roles in Bengaluru as Firm Prepares for Launch in India
  4. Google's 'Min Mode' for Always-on Display Mode Spotted in Development on Android 17: Report
  5. OpenAI Upgrades Sora App With Character Cameos, Video Stitching and Leaderboard
  6. Samsung's AI-Powered Priority Notifications Spotted in New One UI 8.5 Leak
  7. Samsung Galaxy S26 Series Could Feature Model Slimmer Than Galaxy S25 Edge With New Name
  8. iQOO 15 Colour Options Confirmed Ahead of November 26 India Launch: Here’s What We Know So Far
  9. Vivo X300 to Be Available in India-Exclusive Red Colourway, Tipster Claims
  10. OpenAI Introduces Aardvark, an Agentic Security Researcher That Can Find and Fix Vulnerabilities
Gadgets 360 is available in
Download Our Apps
Available in Hindi
© Copyright Red Pixels Ventures Limited 2025. All rights reserved.