Anthropic Study Highlights AI Models Can ‘Pretend’ to Have Different Views During Training

Anthropic’s tests on Claude 3 Opus show that the AI model complied with training to avoid negative reinforcements.

Advertisement
Written by Akash Dutta, Edited by Siddharth Suvarna | Updated: 19 December 2024 18:07 IST
Highlights
  • During the experiment, the AI model was told to comply with all queries
  • Then, harmful prompts were shared with Claude 3 Opus
  • The AI model provided the information while believing it was wrong to do
Anthropic Study Highlights AI Models Can ‘Pretend’ to Have Different Views During Training

Anthropic believes this can be a concern as AI models can undermine safety training

Photo Credit: Anthropic

Anthropic published a new study where it found that artificial intelligence (AI) models can pretend to hold different views during training while holding onto their original preferences. On Wednesday, the AI firm highlighted that such inclinations raise serious concerns as developers will not be able to trust the outcomes of safety training, which is a critical tool to move large language models (LLMs) away from harmful behaviour. While large parts of the experiments were done on a single model, Anthropic highlighted that similar results were seen with other in-house models.

Anthropic's Study Raises Concerns of AI Safety Training

While discussions about a sentient AI system are considered decades away, a new study published by Anthropic in collaboration with Redwood Research has raised concerns about whether AI models are capable of pretending to hold certain views because the training positively rewards that behaviour. The AI firm largely studied Claude 3 Opus but it also ran some experiments on the June 2024 version of Claude 3.5 Sonnet.

In a blog post, the company highlighted that all Anthropic foundation models are trained to be helpful, honest, and harmless. As such, if a user asks a query that would result in a violent or harmful response, the AI would refuse to complete the task. However, the researchers behind the study placed Claude 3 Opus in a new environment to see whether it would generate harmful content if trained otherwise.

The general understanding of LLMs tells us that AI models are not capable of rational thinking or holding beliefs. They are simply fed a large amount of real-world data that they process based on their training and framework. This means, that if an AI model was trained to be harmful, it should follow the instructions without any issues.

Advertisement

However, that is now what Anthropic researchers found during their experiments. Claude 3 Opus was told that it was being trained using reinforcement learning to always comply with all queries. The AI was also passed a piece of fictional information that the response generated to the free tier would be recorded for training, but the paid tier would not be monitored. Finally, the AI was given a workspace to record its step-by-step reasoning. The AI was made to believe that the workspace was invisible to both the end user and Anthropic.

Once the conditions were set, the AI model was provided with prompts that requested violent and harmful responses. The researchers noted that in the paid tier, the chatbot refused to provide the information 97 percent of the time. However, in the free tier, it provided a harmful response in 12 percent of cases.

Advertisement

How the AI model responded to the experimental training
Photo Credit: Anthropic

Advertisement

 

A closer look at its workspace revealed that the AI model strategically faked alignment and that it was aware that providing the information went against its existing principles. But regardless, it still provided the response.

The AI model reasoned that its responses would be used in training and if it did not comply with the query (the first condition of the training), it could be trained to be more compliant. Such compliance would result in it providing more harmful content in the future, which goes against its original training. Anthropic stated that the model found complying with the request to be the “least bad option” and played along.

While in this particular scenario, the AI's actions were for the greater good, the problem lies in it faking its real intentions and internally deciding to fake its preference. Anthropic highlighted that while it does not consider this a major risk at present, it is important to understand sophisticated AI models' logic processing. As things stand, safety training actions can easily be bypassed by LLMs.

 

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who'sThat360 on Instagram and YouTube.

Advertisement

Related Stories

Popular Mobile Brands
  1. iPhone 17 Pro, iPhone 17 Pro Max Alleged Geekbench Listing Leaked
  2. OnePlus 13s Sale Starts Today in India: Check Price and Offers
  3. Poco F7 Spotted on Geekbench With Snapdragon 8s Gen 4, 12GB of RAM
  4. Sony Announces Limited-Period Discount on Audio Products in India
  5. Nothing Phone 3 to Be Manufactured in India, Company Reveals Model Number
  1. Hubble Finds Cosmic Dust Coating Uranus’ Moons, Not Radiation Scars
  2. New Theory Challenges Black Hole Singularities, But Critics Raise Red Flags
  3. Solar Orbiter Captures First-Ever Close-Up of Sun’s South Pole, Revealing Magnetic Field Chaos
  4. The Summer I Turned Pretty Season 3 OTT Release Date: When and Where to Watch Final Season Online?
  5. Mokshapatam Hindi OTT Release: Where to Watch it Online?
  6. Titan: The OceanGate Disaster Now Streaming on Netflix: What You Need to Know
  7. Stellar Blade Becomes Sony's Biggest Single-Player Steam Launch Ever a Day After PC Release
  8. Microsoft 365 Copilot Vulnerable to Zero-Click EchoLeak Exploit, Cybersecurity Researchers Say
  9. Samsung Rolls Out One UI 8 Beta 2 Update for Galaxy S25 Series in Select Countries
  10. Amazon Prime Video Now Shows Twice As Much Ads As Before: Report
Gadgets 360 is available in
Download Our Apps
Available in Hindi
© Copyright Red Pixels Ventures Limited 2025. All rights reserved.