OpenAI says prompt injections remain a key risk for AI browsers and is using an AI attacker to train ChatGPT Atlas.
Photo Credit: Unsplash/Glenn Carstens-Peters
The AI attacker will help ChatGPT Atlas learn to tackle evolving prompt injection techniques
OpenAI called prompt injections “one of the most significant risks” and a “long-term AI security challenge” for artificial intelligence (AI) browsers with agentic capabilities, on Monday. The San Francisco-based AI giant highlighted how the cyberattack technique impacts its ChatGPT Atlas browser and shared a new approach to tackle it. The company is using an AI-powered attacker that simulates real-world prompt injection attempts to train browsers. OpenAI said the goal is not to eliminate the threat, but to continuously harden the system as new attack patterns emerge.
Prompt injection is a technique where an attacker hides instructions using HTML tricks, such as zero font, white-on-white text, or out-of-margin text. This is hidden within normal-looking content that an AI agent is meant to read, such as a webpage, document, or snippet of text. When it processes that content, it may mistakenly treat the hidden instruction as a legitimate command, even though it was not issued by the user. It can then carry out malicious acts due to the access privilege of the AI browser.
In a post, OpenAI explained that prompt injections can be direct, where an attacker clearly tries to override the model's instructions, or indirect, where malicious prompts are embedded inside otherwise normal content. Because ChatGPT Atlas reads and reasons over third-party webpages, it may encounter instructions that were never intended for it but are crafted to influence its behaviour.
To address this, the AI giant has built an automated AI attacker, effectively a system that continuously generates new prompt injection attempts as a simulation. This attacker is used during training and evaluation to stress-test Atlas, exposing weaknesses before they are exploited outside the lab. OpenAI said this allows its teams to identify vulnerabilities faster and update defences more frequently than relying on manual testing alone.
“Prompt injection, like scams and social engineering, is not something we expect to ever fully solve,” OpenAI wrote in the post, adding that the challenge evolves as AI systems become more capable, gaining more permissions and the ability to take more actions. Instead, the company is focusing on layered defences, combining automated attacks, reinforcement learning and policy enforcement to reduce the impact of malicious instructions.
The company said its AI attacker helps create a rapid feedback loop, where new forms of prompt injection discovered by the system can be used to immediately retrain and adjust Atlas. This mirrors how security teams respond to evolving threats on the web, where attackers constantly adapt to new safeguards.
OpenAI did not claim that Atlas is immune to prompt injections. Instead, it framed the work as part of an ongoing effort to keep pace with a problem that changes alongside the technology itself. As AI browsers become more capable and more widely used, the company said sustained investment in automated testing and defensive training will be necessary to limit abuse.
Get your daily dose of tech news, reviews, and insights, in under 80 characters on Gadgets 360 Turbo. Connect with fellow tech lovers on our Forum. Follow us on X, Facebook, WhatsApp, Threads and Google News for instant updates. Catch all the action on our YouTube channel.
Realme 16 Pro Series Camera Features Revealed; Realme Buds Air 8 Launch Date Announced