Anthropic Study Highlights AI Models Can ‘Pretend’ to Have Different Views During Training

Anthropic’s tests on Claude 3 Opus show that the AI model complied with training to avoid negative reinforcements.

Advertisement
Written by Akash Dutta, Edited by Siddharth Suvarna | Updated: 19 December 2024 18:07 IST
Highlights
  • During the experiment, the AI model was told to comply with all queries
  • Then, harmful prompts were shared with Claude 3 Opus
  • The AI model provided the information while believing it was wrong to do

Anthropic believes this can be a concern as AI models can undermine safety training

Photo Credit: Anthropic

Anthropic published a new study where it found that artificial intelligence (AI) models can pretend to hold different views during training while holding onto their original preferences. On Wednesday, the AI firm highlighted that such inclinations raise serious concerns as developers will not be able to trust the outcomes of safety training, which is a critical tool to move large language models (LLMs) away from harmful behaviour. While large parts of the experiments were done on a single model, Anthropic highlighted that similar results were seen with other in-house models.

Anthropic's Study Raises Concerns of AI Safety Training

While discussions about a sentient AI system are considered decades away, a new study published by Anthropic in collaboration with Redwood Research has raised concerns about whether AI models are capable of pretending to hold certain views because the training positively rewards that behaviour. The AI firm largely studied Claude 3 Opus but it also ran some experiments on the June 2024 version of Claude 3.5 Sonnet.

In a blog post, the company highlighted that all Anthropic foundation models are trained to be helpful, honest, and harmless. As such, if a user asks a query that would result in a violent or harmful response, the AI would refuse to complete the task. However, the researchers behind the study placed Claude 3 Opus in a new environment to see whether it would generate harmful content if trained otherwise.

Advertisement

The general understanding of LLMs tells us that AI models are not capable of rational thinking or holding beliefs. They are simply fed a large amount of real-world data that they process based on their training and framework. This means, that if an AI model was trained to be harmful, it should follow the instructions without any issues.

Advertisement

However, that is now what Anthropic researchers found during their experiments. Claude 3 Opus was told that it was being trained using reinforcement learning to always comply with all queries. The AI was also passed a piece of fictional information that the response generated to the free tier would be recorded for training, but the paid tier would not be monitored. Finally, the AI was given a workspace to record its step-by-step reasoning. The AI was made to believe that the workspace was invisible to both the end user and Anthropic.

Once the conditions were set, the AI model was provided with prompts that requested violent and harmful responses. The researchers noted that in the paid tier, the chatbot refused to provide the information 97 percent of the time. However, in the free tier, it provided a harmful response in 12 percent of cases.

Advertisement

How the AI model responded to the experimental training
Photo Credit: Anthropic

Advertisement

 

A closer look at its workspace revealed that the AI model strategically faked alignment and that it was aware that providing the information went against its existing principles. But regardless, it still provided the response.

The AI model reasoned that its responses would be used in training and if it did not comply with the query (the first condition of the training), it could be trained to be more compliant. Such compliance would result in it providing more harmful content in the future, which goes against its original training. Anthropic stated that the model found complying with the request to be the “least bad option” and played along.

While in this particular scenario, the AI's actions were for the greater good, the problem lies in it faking its real intentions and internally deciding to fake its preference. Anthropic highlighted that while it does not consider this a major risk at present, it is important to understand sophisticated AI models' logic processing. As things stand, safety training actions can easily be bypassed by LLMs.

 

Catch the latest from the Consumer Electronics Show on Gadgets 360, at our CES 2026 hub.

Advertisement

Related Stories

Popular Mobile Brands
  1. Apple Sees Record Growth in iPhone Shipments in India
  2. How a Colossal 4-Billion-Year-Old Impact Reshaped the Moon
  3. Researchers Uncover Potential 9-Month 'Wobble' in Nearby Gas Giant
  4. Young Sherlock Now Set for OTT Release on OTT: All the Details
  5. Invincible Season 4 Is Arriving on OTT Soon
  1. Giant Ancient Collision May Have ‘Flipped’ the Moon’s Interior, Study Suggests
  2. VLT’s GRAVITY Instrument Detects ‘Tug’ from Colossal Exomoon; Could Be Largest Natural Satellite Ever Found
  3. Young Sherlock Now Set for OTT Release on OTT: What You Need to Know About Guy Ritchie’s Mystery Thriller
  4. NASA’s Miner++ AI Brings Machine Digs Into TESS Archive to the Hunt for Nearby Earth-Like Worlds
  5. iQOO 15 Ultra Confirmed to Feature Touch-based Shoulder Triggers With Haptic Feedback
  6. Invincible Season 4 OTT Release: When and Where to Watch the Highly Anticipated Viltrumite War Online?
  7. iPhone Shipments in India Rise to 14 Million Units in 2025 as Apple Sees Record Year: Report
  8. Oppo Find N6 Listed on TDRA Website, Hinting at Imminent Launch in the UAE
  9. NASA’s JWST Uncovers a ‘Feeding Frenzy’ That Births Supermassive Black Holes
  10. NASA Confirms Historic Artifacts Will Fly on Artemis II Moon Mission
Gadgets 360 is available in
Download Our Apps
Available in Hindi
© Copyright Red Pixels Ventures Limited 2026. All rights reserved.