• Home
  • Ai
  • Ai News
  • Anthropic Warns That Minimal Data Contamination Can ‘Poison’ Large AI Models

Anthropic Warns That Minimal Data Contamination Can ‘Poison’ Large AI Models

As few as 250 malicious documents can produce a "backdoor" vulnerability in a large AI model, says Anthropic.

Anthropic Warns That Minimal Data Contamination Can ‘Poison’ Large AI Models

Photo Credit: Anthropic

UK AI Security Institute and the Alan Turing Institute partnered with Anthropic on this study

Click Here to Add Gadgets360 As A Trusted Source As A Preferred Source On Google
Highlights
  • LLMs can exfiltrate sensitive data when attacker adds a trigger phrase
  • Anthropic says the size of the total dataset does not matter
  • Study breaks the belief that attackers need to control large data portion
Advertisement

Anthropic, on Thursday, warned developers that even a small data sample contaminated by bad actors can open a backdoor in an artificial intelligence (AI) model. The San Francisco-based AI firm conducted a joint study with the UK AI Security Institute and the Alan Turing Institute to find that the total size of the dataset in a large language model is irrelevant if even a small portion of the dataset is infected by an attacker. The findings challenge the existing belief that attackers need to control a proportionate size of the total dataset in order to create vulnerabilities in a model.

Anthropic's Study Highlights AI Models Can Be Poisoned Relatively Easily

The new study, titled “Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples,” has been published on the online pre-print journal arXiv. Calling it “the largest poisoning investigation to date,” the company claims that just 250 malicious documents in pretraining data can successfully create a backdoor in LLMs ranging from 600M to 13B parameters.

The team focused on a backdoor-style attack that triggers the model to produce gibberish output when encountering a specific hidden trigger token, while otherwise behaving normally, Anthropic explained in a post. They trained models of different parameter sizes, including 600M, 2B, 7B, 13B, on proportionally scaled clean data (Chinchilla-optimal) while injecting 100, 250, or 500 poisoned documents to test vulnerability.

Surprisingly, whether it was a 600M model or a 13B model, the attack success curves were nearly identical for the same number of poisoned documents. The study concludes that model size does not shield against backdoors, and what matters is the absolute number of poisoned points encountered during training.

The researchers further report that while injecting 100 malicious documents was insufficient to reliably backdoor any model, 250 documents or more consistently worked across all sizes. They also varied training volume and random seeds to validate the robustness of the result.

However, the team is cautious: this experiment was constrained to a somewhat narrow denial-of-service (DoS) style backdoor, which causes gibberish output, not more dangerous behaviours such as data leakage, malicious code, or bypassing safety mechanisms. It's still open whether such dynamics hold for more complex, high-stakes backdoors in frontier models.

Comments

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who'sThat360 on Instagram and YouTube.

Akash Dutta
Akash Dutta is a Chief Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food. More
Honor Magic 8 Pro Confirmed to Get a 200-Megapixel Telephoto Camera and Advanced Image Stabilisation

Advertisement

Follow Us

Advertisement

© Copyright Red Pixels Ventures Limited 2025. All rights reserved.
Trending Products »
Latest Tech News »