Google says the AI-developed zero-day exploit was intended for use in a mass exploitation campaign.
Google says threat actors are increasingly using AI for vulnerability discovery and cyberattacks
Google Threat Intelligence Group (GTIG) shared a series of developments in the cybercrime space on Tuesday. The group highlighted that artificial intelligence (AI) is now being used both as an engine for adversary operations and as a high-value target for attacks. The most concerning development is the first known instance of a threat actor deploying an AI-developed zero-day exploit. While Google foiled the attack, the incident raises fresh concerns that AI is bolstering the capabilities of threat actors.
In a blog post, Google's cybersecurity research arm revealed several developments in which AI is being used to carry out cyberattacks. GTIG says threat actors are no longer using AI only for simple phishing emails or text generation. Instead, attackers are now applying generative AI models to more advanced parts of cyber operations, including vulnerability research, exploit development, malware creation, and defence evasion.
One of the key findings in the report is a planned mass exploitation campaign involving a zero-day vulnerability. A zero-day is a software flaw unknown to the vendor at the time attackers begin exploiting it, leaving no window for a pre-emptive patch. GTIG said it identified a threat actor using a zero-day exploit that it believes was developed with assistance from AI tools. Google said it discovered the vulnerability before the attackers could use it at scale and worked with the affected vendor to patch the issue.
The exploit reportedly targeted a popular open-source web administration tool and allowed attackers to bypass two-factor authentication (2FA), a security system that normally requires a second verification step in addition to a password.
Google said signs within the exploit code suggested AI involvement. These included AI-style coding patterns, explanatory comments, and even a fabricated vulnerability severity score generated in a format commonly associated with large language models.
Beyond vulnerability discovery, GTIG said attackers are also using AI to accelerate malware development and improve operational efficiency. According to the report, AI-assisted coding is helping threat actors create more adaptable malware and obfuscation systems designed to evade security software.
Google specifically pointed to malware families such as PROMPTSPY, which the company described as an example of AI-enabled malware capable of interpreting system states and dynamically generating commands. In simpler terms, the malware can adapt its behaviour depending on the environment it encounters on an infected machine, rather than following a fixed, pre-written set of instructions.
The report also said attackers linked to China, North Korea, and Russia have shown increasing interest in using AI models for vulnerability research and attack workflows. In some cases, threat actors reportedly used AI systems to analyse known vulnerabilities, validate proof-of-concept exploits, and improve malicious infrastructure.