Anthropic Warns That Minimal Data Contamination Can ‘Poison’ Large AI Models

As few as 250 malicious documents can produce a "backdoor" vulnerability in a large AI model, says Anthropic.

Advertisement
Written by Akash Dutta, Edited by Ketan Pratap | Updated: 11 October 2025 13:04 IST
Highlights
  • LLMs can exfiltrate sensitive data when attacker adds a trigger phrase
  • Anthropic says the size of the total dataset does not matter
  • Study breaks the belief that attackers need to control large data portion

UK AI Security Institute and the Alan Turing Institute partnered with Anthropic on this study

Photo Credit: Anthropic

Anthropic, on Thursday, warned developers that even a small data sample contaminated by bad actors can open a backdoor in an artificial intelligence (AI) model. The San Francisco-based AI firm conducted a joint study with the UK AI Security Institute and the Alan Turing Institute to find that the total size of the dataset in a large language model is irrelevant if even a small portion of the dataset is infected by an attacker. The findings challenge the existing belief that attackers need to control a proportionate size of the total dataset in order to create vulnerabilities in a model.

Anthropic's Study Highlights AI Models Can Be Poisoned Relatively Easily

The new study, titled “Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples,” has been published on the online pre-print journal arXiv. Calling it “the largest poisoning investigation to date,” the company claims that just 250 malicious documents in pretraining data can successfully create a backdoor in LLMs ranging from 600M to 13B parameters.

The team focused on a backdoor-style attack that triggers the model to produce gibberish output when encountering a specific hidden trigger token, while otherwise behaving normally, Anthropic explained in a post. They trained models of different parameter sizes, including 600M, 2B, 7B, 13B, on proportionally scaled clean data (Chinchilla-optimal) while injecting 100, 250, or 500 poisoned documents to test vulnerability.

Advertisement

Surprisingly, whether it was a 600M model or a 13B model, the attack success curves were nearly identical for the same number of poisoned documents. The study concludes that model size does not shield against backdoors, and what matters is the absolute number of poisoned points encountered during training.

Advertisement

The researchers further report that while injecting 100 malicious documents was insufficient to reliably backdoor any model, 250 documents or more consistently worked across all sizes. They also varied training volume and random seeds to validate the robustness of the result.

However, the team is cautious: this experiment was constrained to a somewhat narrow denial-of-service (DoS) style backdoor, which causes gibberish output, not more dangerous behaviours such as data leakage, malicious code, or bypassing safety mechanisms. It's still open whether such dynamics hold for more complex, high-stakes backdoors in frontier models.

 

Catch the latest from the Consumer Electronics Show on Gadgets 360, at our CES 2026 hub.

Advertisement

Related Stories

Popular Mobile Brands
  1. Not All The Movies Are The Same: Dual Now Streaming on Lionsgate Play
  2. Infinix Note 60 with Android 16 Spotted on Google Play Console
  1. NASA Spots Giant Antarctic Iceberg Turning Blue as It Nears Breakup
  2. ISRO to Launch PSLV-C62 With EOS-N1 Hyperspectral Satellite on January 12
  3. Astronomers Discover Shockingly Hot Young Galaxy Cluster That Defies Theory
  4. Hubble Telescope Spots Starless Dark Matter Cloud Cloud 9, Opening Window Into Dark Universe
  5. Devkhel OTT Release: Mythology-Based Mystery Series Coming Soon on Z5
  6. Not All The Movies Are The Same: Dual Now Streaming on Lionsgate Play
  7. Scum of the Brave Now Available for Streaming on Crunchyroll: Everything You Need to Know
  8. The Thing With Feathers Now Streaming Online: What You Need to Know About Benedict Cumberbatch Starrer Movie
  9. Infinix Note 60 with Android 16 Spotted on Google Play Console
  10. WhatsApp Might Soon Let You Set a Profile Cover Photo on iOS
Gadgets 360 is available in
Download Our Apps
Available in Hindi
© Copyright Red Pixels Ventures Limited 2026. All rights reserved.