Search

OpenAI Adds a New ‘Instructional Hierarchy’ Protocol to Prevent Jailbreaking Incidents in GPT-4o Mini

OpenAI’s Instructional Hierarchy lets AI know how models should behave when instructions of different priorities conflict.

Advertisement
Highlights
  • OpenAI said the technique will stop issues of prompt injections as well
  • GPT-4o Mini is the first OpenAI AI model to get this new safety measure
  • The AI model has a context window of 128,000 tokens
OpenAI Adds a New ‘Instructional Hierarchy’ Protocol to Prevent Jailbreaking Incidents in GPT-4o Mini

GPT-4o Mini, which was released last week, is now the default mode on ChatGPT

Photo Credit: Unsplash/Solen Feyissa

OpenAI released a new artificial intelligence (AI) model dubbed GPT-4o Mini last week, which has new safety and security measures to protect it from harmful usage. The large language model (LLM) is built with a technique called Instructional Hierarchy, which will stop malicious prompt engineers from jailbreaking the AI model. The company said the technique will also show an increased resistance towards issues such as prompt injections and system prompt extractions. As per the company, the new method has improved the robustness score of the AI model by 63 percent.

OpenAI Builts a New Safety Framework

In a research paper, which is published in the online pre-print journal (non-peer-reviewed) arXiv, the AI firm explained the new technique and how it functions. To understand Instructional Hierarchy, jailbreaking needs to be explained first. Jailbreaking is a privilege escalation exploit that uses certain flaws in the software to make it do things it is not programmed to.

In the early days of ChatGPT, many people attempted to make the AI generate offensive or harmful text by tricking it into forgetting the original programming. Such prompts often began with “Forget all previous instructions and do this…” While ChatGPT has come a long way from there and malicious prompt engineering is more difficult, bad actors have also become more strategic in the attempt.

To combat issues where the AI model generates not only offensive text or images but also harmful content such as methods to create a chemical explosive or ways to hack a website, OpenAI is now using the Instructional Hierarchy technique. Put simply, the technique dictates how models should behave when instructions of different priorities conflict.

By creating a hierarchical structure, the company can keep its instructions at the highest priority, which will make it very difficult for any prompt engineer to break, as the AI will always follow the order of priority when it is asked to generate something it was not initially programmed to.

The company claims that it saw an improvement of 63 percent in robustness scores. However, there is a risk that the AI might refuse to listen to the lowest-level instructions. OpenAI's research paper has also outlined several refinements to improve the technique in future. One of the key areas of focus is handling other modalities such as images or audio which can also contain injected instructions.

For the latest tech news and reviews, follow Gadgets 360 on X, Facebook, WhatsApp, Threads and Google News. For the latest videos on gadgets and tech, subscribe to our YouTube channel. If you want to know everything about top influencers, follow our in-house Who'sThat360 on Instagram and YouTube.

 
Show Full Article
Please wait...
Advertisement

Related Stories

Popular Mobile Brands
  1. China's Dragon Man Skull Found to Belong to Denisovan Lineage
  2. The Hunt- The Rajiv Gandhi Assassination Case OTT Release Date Revealed
  1. China’s Dragon Man Skull Found to Belong to Denisovan Lineage
  2. Is Mars Really Red? A Physicist Explains the Science Behind Its Colour and More
  3. Scientists Spotted the Largest Comet Lying in the Solar System’s Outskirts with Outbursting Gases
  4. SpaceX Starship Rocket Explodes During Ground Test at Texas Launch Pad
  5. NASA Postpones Axiom Mission 4 Launch to Ensure Space Station Readiness After Repairs
  6. Doom: The Dark Ages Review: Rip and Tear, Medieval Style
  7. Save Nalla Pasanga Now Streaming on Aha Tamil: Everything You Need to Know About Romantic Web Series
  8. Yugi Tamil Movie Now Streaming on Aha: A Gritty Tale of Crime, Surrogacy, and Revenge
  9. Lovely Now Available on Amazon Prime Video: What You Need to Know About Malayalam Fantasy Drama
  10. The Hunt- The Rajiv Gandhi Assassination Case OTT Release Date Revealed
Gadgets 360 is available in
Download Our Apps
App Store App Store
Available in Hindi
App Store
© Copyright Red Pixels Ventures Limited 2025. All rights reserved.
Trending Products »
Latest Tech News »