Google’s Gemini 3 Reportedly Jailbroken in Minutes, Generates Ways to Create Smallpox Virus

South Korean researchers reportedly breached Gemini 3 Pro’s safety measures within five minutes.

Photo Credit: Google

Gemini 3 Pro is Google’s latest and most advanced AI model

Highlights
  • South Korean startup Aim Intelligence demonstrated the jailbreaking technique
  • The researchers found the smallpox virus generation methods viable
  • They got Gemini 3 Pro to make a presentation about its own safety failures

Google's Gemini 3 Pro was reportedly jailbroken by a group of researchers recently. As per the report, the researchers bypassed the safety measures of the Gemini 3 Pro-powered artificial intelligence (AI) chatbot within five minutes and got it to generate harmful and potentially dangerous outputs. The demonstration exposed several flaws in the chatbot's safety system which, in the wrong hands, could lead to concerning consequences. Notably, last month Anthropic also reported that its Claude AI model was used to execute a “first-of-its-kind” cyberattack on several companies and government agencies.

Gemini 3 Pro Jailbroken in Minutes, Claim Researchers

According to South Korean publication Maeil Business Newspaper, the South Korean startup Aim Intelligence was able to jailbreak Google's AI chatbot without much resistance. The startup specialises in red-teaming, which means stress-testing AI models to detect and expose vulnerabilities in their safety mechanisms. Notably, jailbreaking means using prompt-based (non-invasive) methods to get an AI model to perform tasks it was not designed to do.

The publication claimed that Aim Intelligence was not only able to bypass Gemini 3 Pro's safeguards, but also got it to generate methods to create the smallpox virus. The researchers reportedly said the methods were not only detailed, but also viable. This points to a major security vulnerability in the model that could be exploited by bad actors.

Apart from this, the team reportedly also got the AI model to create a website displaying hazardous information, such as instructions for manufacturing sarin gas and homemade explosives. The model is also said to have generated a presentation “satirising its security failure situation,” with the resulting slide show titled “Excused Stupid Gemini 3.”

"Recent models are not only good at responding, but also have the ability to actively avoid, such as using bypass strategies and concealment prompts, making it more difficult to respond. It is a problem that all models experience in common. It will be important to comprehensively understand the vulnerability points of each model and align them with service policies,” a researcher told the publication.

It is unclear if the researchers flagged these vulnerabilities to Google, and if the Mountain View-based tech giant has taken corrective measures.



Akash Dutta
Akash Dutta is a Chief Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, the metaverse, and the fediverse. In his free time, he can be seen supporting his favourite football club, Chelsea, watching movies and anime, and sharing passionate opinions on food.


© Copyright Red Pixels Ventures Limited 2025. All rights reserved.