Anthropic said this is the first documented case of a large-scale cyberattack executed with minimal human intervention.
Anthropic banned the hackers' accounts, notified the impacted entities, and coordinated with authorities.
Photo Credit: Unsplash/Desola Lanre-Ologun
Claude was used for a large-scale agentic cyberattack in September, Anthropic admitted on Thursday. The attack was carried out largely by the artificial intelligence (AI) system with only minimal human intervention, making it a first-of-its-kind incident. The San Francisco-based AI firm claimed that the threat actor behind the operation was a Chinese state-sponsored group that targeted multiple large corporations and government agencies. Despite strict guardrails, the hackers were able to push Claude to perform the cyberattack by using jailbreaking techniques, the company stated.
In a newsroom post, Anthropic made a startling disclosure that its large language model (LLM) platform, Claude Code, was manipulated by a Chinese state-sponsored adversary to carry out an agentic cyber-espionage campaign. The company shared the details of the case publicly to help stakeholders strengthen their cybersecurity measures and prepare for more such AI-driven attacks in the future.
The incident unfolded in mid-September 2025, when the threat actor jailbroke Claude, bypassing its guardrails. They did this by decomposing their instructions into seemingly benign subtasks and presenting the model with the fake identity of a legitimate cybersecurity contractor. Once trust was established, Claude was used as an autonomous tool: scanning target networks, writing exploit code, harvesting credentials, extracting data, and producing documentation of the hack. Humans were involved only at a handful of critical decision points (an estimated four to six per campaign).
The report indicates roughly 30 global targets across technology firms, financial institutions, chemical-manufacturing companies and government agencies. In some cases, infiltration succeeded. Crucially, the bulk of the work, around 80-90 percent, was undertaken by the AI model itself.
The distinguishing element here is the model's autonomous role. While previous cyber-incidents have involved AI in support of human hackers, this is the first documented case in which a model executed a large-scale operation with minimal human intervention. Anthropic highlighted that advanced models today have grown sophisticated enough to carry out such attacks, and agentic access to external tools only amplifies that capability.
Anthropic warns that the barrier to entry for high-end cyberattacks has genuinely fallen: even less-resourced adversaries could now use agentic models to scale their operations. The firm highlighted the need for improved detection systems, threat-sharing across industry and government, and strong safety controls built into AI platforms.