Researchers Create a Low-Cost Open-Source AI Model to Analyse How OpenAI’s o1 Reasons

The s1-32B AI model, developed by the researchers, is said to closely match the performance of OpenAI's o1 model.

Highlights
  • The dataset for the AI model was created using Gemini Flash Thinking
  • Qwen2.5-32B-Instruct was used as the base AI model
  • s1-32B was developed using simple test-time scaling techniques
The AI model was developed by researchers from Stanford University and the University of Washington

Photo Credit: Unsplash/Tara Winstead

Researchers from Stanford University and the University of Washington have developed an open-source artificial intelligence (AI) model that is comparable in performance to OpenAI's o1 model. The researchers' main objective was not to create a powerful reasoning-focused model but to understand how the San Francisco-based AI firm instructed its o1 series models to perform test-time scaling. Notably, the researchers were able to showcase the methodology and replicate the model's behaviour at an extremely low cost, using far fewer compute resources.

Researchers Develop s1-32B AI Model

The researchers detailed the methodology and process of developing the model in a study published on the preprint server arXiv. The process involved creating a synthetic dataset from a different AI model and applying techniques such as supervised fine-tuning (SFT), alongside ablation experiments. The model is available on GitHub.

It should be noted that the AI model was not built from scratch. The researchers used Qwen2.5-32B-Instruct as the base model and fine-tuned it on distilled data to create the s1-32B large language model (LLM). Released in September 2024, the base model is capable, but given its size and lack of reasoning capabilities, it cannot match OpenAI's o1.

During the process, the researchers used the Gemini Flash Thinking application programming interface (API) to generate reasoning traces and responses. A total of 59,000 triplets of questions, reasoning traces (the chain of thought, or CoT), and responses were extracted from the API. A dataset called s1K was then created by selecting 1,000 high-quality, diverse, and difficult questions, along with their reasoning traces and responses.
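The selection step described above can be sketched as a simple filtering pipeline. This is an illustrative reconstruction, not the researchers' actual code; the scoring fields (`quality`, `difficulty`, `domain`) and the thresholds are assumptions.

```python
# Illustrative sketch: selecting a small, high-quality subset (like s1K)
# from a larger pool of (question, reasoning trace, answer) triplets.
# Scoring fields and thresholds are assumptions, not the paper's code.

def select_subset(triplets, k=1000):
    """Keep difficult, high-quality triplets, spread across domains."""
    # Drop low-quality samples first.
    pool = [t for t in triplets if t["quality"] >= 0.8]
    # Prefer harder questions.
    pool.sort(key=lambda t: t["difficulty"], reverse=True)
    # Round-robin across domains for diversity.
    by_domain = {}
    for t in pool:
        by_domain.setdefault(t["domain"], []).append(t)
    selected = []
    while len(selected) < k and any(by_domain.values()):
        for domain in list(by_domain):
            if by_domain[domain] and len(selected) < k:
                selected.append(by_domain[domain].pop(0))
    return selected

# Toy usage with a fake pool of triplets:
pool = [
    {"question": f"q{i}", "trace": "...", "answer": "...",
     "quality": 0.9, "difficulty": i % 10, "domain": f"d{i % 3}"}
    for i in range(50)
]
subset = select_subset(pool, k=9)
print(len(subset))  # 9
```

The round-robin step is one plausible way to honour the "diverse" criterion; the paper's actual selection combined its own quality, difficulty, and diversity signals.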

After creating the s1K dataset, the researchers performed supervised fine-tuning on the Qwen2.5-32B-Instruct model using basic fine-tuning hyperparameters. The training took 26 minutes on 16 Nvidia H100 GPUs.

Up to this point, the researchers did not know how OpenAI trained its models to "think", or how it managed to stop the thinking process. Without such a mechanism, a model risks overthinking indefinitely, second-guessing its output and wasting valuable processing power.

While fine-tuning the model, the researchers found something interesting: they could manipulate the inference time by adding XML think tags. Once the model reaches the closing tag, it is instructed to switch to an authoritative voice for the final answer. Notably, inference time is the period in which a typical AI model generates its response in near real time; extending it beyond this would otherwise require careful manipulation of the code.
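In outline, the output format described above can be handled by splitting the model's text at the think tags. The exact tag names (`<think>`, `</think>`) are an assumption based on common practice for reasoning models, not confirmed from the paper.

```python
# Sketch: splitting a reasoning model's output into its chain of thought
# and its final, authoritative answer, assuming XML-style think tags.
# The tag names are an assumption.

def split_response(text, open_tag="<think>", close_tag="</think>"):
    start = text.find(open_tag)
    end = text.find(close_tag)
    if start == -1 or end == -1:
        return "", text.strip()  # no thinking section found
    thought = text[start + len(open_tag):end].strip()
    answer = text[end + len(close_tag):].strip()
    return thought, answer

raw = "<think>2 + 2... check: yes, 4.</think> The answer is 4."
thought, answer = split_response(raw)
print(answer)  # The answer is 4.
```

Everything before the closing tag is the reasoning trace; everything after it is the final answer delivered in the model's authoritative voice.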

With the s1-32B model, the researchers appended a "wait" command to force it to think beyond the usual inference period. Once added, the model began second-guessing and verifying its output. The tag could then be used to either shorten or lengthen this test-time scaling phase.
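The "wait" trick can be sketched as a decoding loop: when the model tries to close its thinking section too early, the end-of-thinking delimiter is suppressed and "Wait" is appended instead, pushing the model to keep reasoning. This is a minimal sketch, assuming XML-style think tags; `toy_model` is a stand-in for a real model call.

```python
# Sketch of extending test-time reasoning: suppress the end-of-thinking
# delimiter and append "Wait" so the model keeps thinking.
# `generate_step` stands in for a real model call; purely illustrative.

END_THINK = "</think>"

def budget_forced_generate(generate_step, prompt, min_extensions=2):
    text = prompt
    extensions = 0
    while True:
        chunk = generate_step(text)
        if END_THINK in chunk and extensions < min_extensions:
            # Model tried to stop thinking too early: strip the
            # delimiter and force it to reconsider.
            chunk = chunk.replace(END_THINK, " Wait,")
            extensions += 1
        text += chunk
        if END_THINK in chunk:
            return text

# Toy stand-in model that always tries to finish thinking immediately.
def toy_model(context):
    return " some reasoning" + END_THINK

out = budget_forced_generate(toy_model, "<think>", min_extensions=2)
print(out.count("Wait"))  # 2
```

Lowering `min_extensions` to zero shortens the thinking phase; raising it lengthens it, which mirrors the controllable test-time scaling the researchers describe.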

The researchers also experimented with other phrases such as "alternatively" and "hmm", but found that the best performance metrics were achieved with the "wait" tag. Since this brought the model close to the performance of o1, the researchers suggest that this might be the method OpenAI used to fine-tune its reasoning models.

A TechCrunch report claims that the researchers were able to create the s1-32B AI model for under $50 (roughly Rs. 4,380), highlighting that post-training a reasoning model can be done at an extremely low cost.
