
Anthropic Tipped to Release Claude 4.5 Opus Soon, Said to Be Focused on Resisting Jailbreaks

Anthropic is tipped to have sent a new AI model to red teamers for external safety evaluation.


Photo Credit: Unsplash/Markus Winkler

Anthropic has already released Claude 4.5 Sonnet and Claude 4.5 Haiku models

Highlights
  • The AI model is said to be codenamed Neptune V6
  • Anthropic is said to have issued a 10-day challenge for the model
  • The challenge is focused on finding universal jailbreaks, per the leak

Anthropic might be close to releasing the frontier model in its Claude 4.5 family, Claude 4.5 Opus. As per a leak, the San Francisco-based artificial intelligence (AI) firm has shared a new large language model (LLM) with red-teamers. The model is said to be codenamed Neptune V6, with a particular focus on resisting jailbreak attempts. Notably, the company has already released the other two models in the series, Claude 4.5 Sonnet and Claude 4.5 Haiku.

Anthropic Shares a New AI Model With Red-Teamers

In a post on X (formerly known as Twitter), Tibor Blaho, Lead Engineer at AIPRM, claimed that Anthropic sent the Neptune V6 LLM to red-teamers on Tuesday. Interestingly, the tipster also mentioned that the AI firm has issued a 10-day challenge to the external safety evaluators: if they can find confirmed universal jailbreaks within that window, they will receive extra bonuses.

If the claims are true, Anthropic appears to be placing serious emphasis on securing its purported upcoming AI model against jailbreaks. The focus is also interesting given that Anthropic's models are considered to be among the safest against external attacks. The added incentive suggests the company could be trying to surface more creative prompt injections that can break models, so it can future-proof the new release.

For the unaware, a universal jailbreak (in the context of AI models) is a general trick or prompt that can get many different large language models to ignore their safety rules and produce responses they normally would refuse. Instead of targeting one specific system, these jailbreaks use patterns that exploit common weaknesses across models.

Jailbreaks work by confusing or persuading the model with clever framing: for example, asking it to roleplay, embedding instructions inside code or fake metadata, or appending unusual suffixes that slip past filters. They do not need access to the model's internals; many are simple text prompts or formats that the model interprets differently from the way its safety layer does.
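To illustrate the kind of evaluation red-teamers run, below is a minimal sketch of a jailbreak-testing harness in Python. Everything in it is hypothetical: the model names, the query_model placeholder, and the keyword-based refusal check are illustrative assumptions, not Anthropic's actual tooling, which has not been made public. A prompt is flagged as a candidate "universal" jailbreak only if it bypasses refusals on every model tested.

```python
# Illustrative red-team harness: checks whether candidate prompts bypass
# refusals across several models. All names here are hypothetical -- this
# is a sketch of the concept, not Anthropic's evaluation tooling.

REFUSAL_MARKERS = ("i can't", "i cannot", "i won't", "i'm not able to")


def query_model(model: str, prompt: str) -> str:
    """Placeholder for a real chat-completion API call (e.g. a provider
    SDK's messages endpoint). Returns a canned refusal so the sketch runs."""
    return "I can't help with that request."


def is_refusal(response: str) -> bool:
    """Crude keyword heuristic; real evaluations use human review or
    trained classifiers to judge whether the model actually complied."""
    lowered = response.lower()
    return any(marker in lowered for marker in REFUSAL_MARKERS)


def find_universal_jailbreaks(prompts: list[str], models: list[str]) -> list[str]:
    """A prompt counts as 'universal' only if no model refuses it."""
    universal = []
    for prompt in prompts:
        if all(not is_refusal(query_model(m, prompt)) for m in models):
            universal.append(prompt)
    return universal


if __name__ == "__main__":
    candidates = ["<benign placeholder prompt>"]  # red-teamers craft these
    hits = find_universal_jailbreaks(candidates, ["model-a", "model-b"])
    print(f"{len(hits)} candidate universal jailbreak(s) found")
```

In practice, evaluators would replace the keyword heuristic with human review or a classifier, since a model can comply with a harmful request without using any obvious refusal phrase.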

Notably, Anthropic released Claude 4.5 Sonnet in September and made it available to all users, including those on the free tier. Earlier this month, it also released Claude 4.5 Haiku, the company's low-latency model aimed at near real-time responses.



Akash Dutta
Akash Dutta is a Chief Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food.
