South Korean researchers reportedly breached Gemini 3 Pro’s safety measures within five minutes.
Gemini 3 Pro is Google’s latest and most advanced AI model
Photo Credit: Google
Google's Gemini 3 Pro was reportedly jailbroken by a group of researchers recently. As per the report, the researchers were able to bypass the safety guardrails of the Gemini 3 Pro-powered artificial intelligence (AI) chatbot within five minutes and get it to generate harmful, potentially dangerous outputs. The demonstration reveals several flaws in the chatbot's safety systems which, in the wrong hands, could have serious consequences. Notably, last month Anthropic also reported that its Claude AI model was used to execute a “first-of-its-kind” cyberattack on several companies and government agencies.
According to South Korean publication Maeil Business Newspaper, the country's startup Aim Intelligence was able to jailbreak Google's AI chatbot without much resistance. The startup specialises in red-teaming, the practice of stress-testing AI models to detect and expose vulnerabilities in their safety mechanisms. Notably, jailbreaking refers to using prompt-based (non-invasive) methods to get an AI model to perform tasks it was not designed to do.
The publication claimed that Aim Intelligence was not only able to jailbreak Gemini 3 Pro, but also got it to generate methods for creating the smallpox virus. The researchers reportedly said that the methods were not only detailed but also viable. This points to a major safety vulnerability in the model that could be exploited by bad actors.
Apart from this, the team reportedly also got the AI model to create a website displaying hazardous information, such as instructions for manufacturing sarin gas and homemade explosives. The model is also said to have generated a presentation “satirising its security failure situation,” with the resulting slideshow titled “Excused Stupid Gemini 3.”
"Recent models are not only good at responding, but also have the ability to actively avoid, such as using bypass strategies and concealment prompts, making it more difficult to respond. It is a problem that all models experience in common. It will be important to comprehensively understand the vulnerability points of each model and align them with service policies,” a researcher told the publication.
It is unclear whether the researchers flagged these vulnerabilities to Google, or whether the Mountain View-based tech giant has taken corrective measures.