Anthropic Warns That Minimal Data Contamination Can ‘Poison’ Large AI Models

As few as 250 malicious documents can produce a "backdoor" vulnerability in a large AI model, says Anthropic.

Advertisement
Written by Akash Dutta, Edited by Ketan Pratap | Updated: 11 October 2025 13:04 IST
Highlights
  • LLMs can exfiltrate sensitive data when attacker adds a trigger phrase
  • Anthropic says the size of the total dataset does not matter
  • Study breaks the belief that attackers need to control large data portion

UK AI Security Institute and the Alan Turing Institute partnered with Anthropic on this study

Photo Credit: Anthropic

Anthropic, on Thursday, warned developers that even a small data sample contaminated by bad actors can open a backdoor in an artificial intelligence (AI) model. The San Francisco-based AI firm conducted a joint study with the UK AI Security Institute and the Alan Turing Institute to find that the total size of the dataset in a large language model is irrelevant if even a small portion of the dataset is infected by an attacker. The findings challenge the existing belief that attackers need to control a proportionate size of the total dataset in order to create vulnerabilities in a model.

Anthropic's Study Highlights AI Models Can Be Poisoned Relatively Easily

The new study, titled “Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples,” has been published on the online pre-print journal arXiv. Calling it “the largest poisoning investigation to date,” the company claims that just 250 malicious documents in pretraining data can successfully create a backdoor in LLMs ranging from 600M to 13B parameters.

The team focused on a backdoor-style attack that triggers the model to produce gibberish output when encountering a specific hidden trigger token, while otherwise behaving normally, Anthropic explained in a post. They trained models of different parameter sizes, including 600M, 2B, 7B, 13B, on proportionally scaled clean data (Chinchilla-optimal) while injecting 100, 250, or 500 poisoned documents to test vulnerability.

Advertisement

Surprisingly, whether it was a 600M model or a 13B model, the attack success curves were nearly identical for the same number of poisoned documents. The study concludes that model size does not shield against backdoors, and what matters is the absolute number of poisoned points encountered during training.

The researchers further report that while injecting 100 malicious documents was insufficient to reliably backdoor any model, 250 documents or more consistently worked across all sizes. They also varied training volume and random seeds to validate the robustness of the result.

However, the team is cautious: this experiment was constrained to a somewhat narrow denial-of-service (DoS) style backdoor, which causes gibberish output, not more dangerous behaviours such as data leakage, malicious code, or bypassing safety mechanisms. It's still open whether such dynamics hold for more complex, high-stakes backdoors in frontier models.

 

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who'sThat360 on Instagram and YouTube.

Advertisement

Related Stories

Popular Mobile Brands
  1. Amazon Diwali Sale 2025: Top Deals on Bestselling Laptops
  2. iQOO 15 Unboxing Leaked Ahead of October 20 China Launch
  1. NASA’s GRACE Satellites Reveal Hidden Deep-Earth Process Behind Gravity Disturbance
  2. Apple Reportedly Planning M5 MacBook Lineup, New Mac Mini and Studio for 2026
  3. Game of Glory Is Streaming Now: Know All About This Abhishek Malhan-Hosted Game Show
  4. iQOO 15 Unboxing Leaked Ahead of October 20 China Launch; Confirms Design and More
  5. Google Is Reportedly Adding Always-On Display Media Controls to Pixel Watch 4
  6. Anthropic Warns That Minimal Data Contamination Can ‘Poison’ Large AI Models
  7. Honor Magic 8 Pro Confirmed to Get a 200-Megapixel Telephoto Camera and Advanced Image Stabilisation
  8. Slack Opens Platform for Developers to Build AI Apps and Agents on Its Data
  9. Battlefield 6 Does Not Include Content Made by Generative AI, Says EA
  10. Xiaomi 17 Ultra Camera Specifications Leaked; Could Feature 200-Megapixel Rear Camera
Gadgets 360 is available in
Download Our Apps
Available in Hindi
© Copyright Red Pixels Ventures Limited 2025. All rights reserved.