As few as 250 malicious documents can produce a "backdoor" vulnerability in a large AI model, says Anthropic.
UK AI Security Institute and the Alan Turing Institute partnered with Anthropic on this study
Photo Credit: Anthropic
Anthropic, on Thursday, warned developers that even a small data sample contaminated by bad actors can open a backdoor in an artificial intelligence (AI) model. The San Francisco-based AI firm conducted a joint study with the UK AI Security Institute and the Alan Turing Institute to find that the total size of the dataset in a large language model is irrelevant if even a small portion of the dataset is infected by an attacker. The findings challenge the existing belief that attackers need to control a proportionate size of the total dataset in order to create vulnerabilities in a model.
The new study, titled “Poisoning Attacks on LLMs Require a Near-constant Number of Poison Samples,” has been published on the online pre-print journal arXiv. Calling it “the largest poisoning investigation to date,” the company claims that just 250 malicious documents in pretraining data can successfully create a backdoor in LLMs ranging from 600M to 13B parameters.
The team focused on a backdoor-style attack that triggers the model to produce gibberish output when encountering a specific hidden trigger token, while otherwise behaving normally, Anthropic explained in a post. They trained models of different parameter sizes, including 600M, 2B, 7B, 13B, on proportionally scaled clean data (Chinchilla-optimal) while injecting 100, 250, or 500 poisoned documents to test vulnerability.
Surprisingly, whether it was a 600M model or a 13B model, the attack success curves were nearly identical for the same number of poisoned documents. The study concludes that model size does not shield against backdoors, and what matters is the absolute number of poisoned points encountered during training.
The researchers further report that while injecting 100 malicious documents was insufficient to reliably backdoor any model, 250 documents or more consistently worked across all sizes. They also varied training volume and random seeds to validate the robustness of the result.
However, the team is cautious: this experiment was constrained to a somewhat narrow denial-of-service (DoS) style backdoor, which causes gibberish output, not more dangerous behaviours such as data leakage, malicious code, or bypassing safety mechanisms. It's still open whether such dynamics hold for more complex, high-stakes backdoors in frontier models.
Get your daily dose of tech news, reviews, and insights, in under 80 characters on Gadgets 360 Turbo. Connect with fellow tech lovers on our Forum. Follow us on X, Facebook, WhatsApp, Threads and Google News for instant updates. Catch all the action on our YouTube channel.