Researchers Create a Low-Cost Open-Source AI Model to Analyse How OpenAI’s o1 Reasons

The S1-32B AI model, developed by the researchers, is said to closely match the performance of OpenAI’s o1 model.

Written by Akash Dutta, Edited by Siddharth Suvarna | Updated: 6 February 2025 18:50 IST
Highlights
  • The dataset for the AI model was created using Gemini Flash Thinking
  • Qwen2.5-32B-Instruct was used as the base AI model
  • S1-32B was developed using simple test-time scaling techniques

The AI model was developed by researchers from Stanford University and the University of Washington

Photo Credit: Unsplash/Tara Winstead

Researchers from Stanford University and the University of Washington have developed an open-source artificial intelligence (AI) model that is comparable in performance to OpenAI's o1 model. The main objective of the researchers was not to create a powerful reasoning-focused model but to understand how the San Francisco-based AI firm trained its o1 series models to perform test-time scaling. Notably, the researchers were able to showcase the methodology and replicate the model's behaviour at an extremely low cost, using far fewer compute resources.

Researchers Develop S1-32B AI Model

The researchers detailed the methodology and process of developing the model in a study published on the preprint server arXiv. The process involved creating a synthetic dataset from a different AI model and applying techniques such as supervised fine-tuning (SFT), alongside ablation experiments. The model is available on GitHub.

It should be noted that the AI model was not built from scratch. The researchers started with Qwen2.5-32B-Instruct and fine-tuned it via distillation to create the s1-32B large language model (LLM). Released in September 2024, the base model is capable, but given its size and lack of reasoning capability, it cannot match OpenAI's o1.

During the process, the researchers used the Gemini Flash Thinking application programming interface (API) to generate reasoning traces and responses. A total of 59,000 triplets of questions, reasoning traces (the chain of thought, or CoT), and responses were extracted from the API. A dataset called s1K was then created by selecting 1,000 high-quality, diverse, and difficult questions, along with their reasoning traces and responses.
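As a rough sketch (not the authors' actual code; the scoring criterion here is a stand-in, since the study used quality, difficulty, and diversity measures), the selection step amounts to filtering a large pool of question/trace/answer triplets down to a small subset:

```python
# Hypothetical sketch of an s1K-style selection step: start from a large
# pool of (question, reasoning_trace, answer) triplets and keep a small
# subset. Trace length is used here as a crude stand-in for difficulty.

def select_subset(triplets, k=1000):
    """Keep the k triplets with the longest reasoning traces."""
    ranked = sorted(
        triplets, key=lambda t: len(t["reasoning_trace"]), reverse=True
    )
    return ranked[:k]

# Toy pool standing in for the 59,000 triplets extracted from the API.
pool = [
    {"question": f"q{i}", "reasoning_trace": "step " * (i % 7 + 1), "answer": f"a{i}"}
    for i in range(59)
]

s1k_like = select_subset(pool, k=10)  # stands in for the 1,000-item s1K
```

The real pipeline would score each triplet on multiple axes before ranking; the point of the sketch is only that the dataset is a heavily filtered slice of a much larger synthetic pool.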


After creating the s1K dataset, the researchers performed supervised fine-tuning on the Qwen2.5-32B-Instruct model. For this, basic fine-tuning hyperparameters were used. The distillation process took 26 minutes of training on 16 Nvidia H100 GPUs.

Up to this point, the researchers had no idea how OpenAI trained its models to “think”, or how it managed to stop the thinking process. Without a stopping mechanism, a model risks overthinking indefinitely as it second-guesses its output, wasting valuable processing power.


While fine-tuning the model, the researchers found something interesting: they could manipulate inference time by adding XML think tags. Once the model reaches the closing tag, it is told to switch to an authoritative tone for the final answer. Inference time here refers to the period in which a model generates its near real-time response; extending it beyond the default requires deliberate intervention.

With the s1-32B model, the researchers added a “wait” command to force it to think beyond the usual inference period. Once added, the model began second-guessing and verifying its output. The tag could then be used to either shorten or lengthen this test-time scaling phase.
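The “wait” trick can be sketched in a few lines (a minimal illustration, not the study's code; `generate_step` and the tag strings are stand-ins for a real model's decoding loop and delimiters). The idea is to intercept the model's attempt to close its thinking block and append “Wait” instead, forcing further reasoning:

```python
# Illustrative sketch of extending "thinking" at inference time: when the
# model emits the end-of-thinking tag, suppress it and append "Wait" to
# keep the model reasoning, up to a fixed number of extensions.

END_THINK = "</think>"

def generate_step(prompt):
    # Stand-in for a real LLM decoding call. Here it pretends the model
    # always tries to stop thinking immediately.
    return END_THINK

def generate_with_budget(prompt, max_extensions=2):
    text = prompt
    extensions = 0
    while True:
        chunk = generate_step(text)
        if chunk == END_THINK and extensions < max_extensions:
            # Suppress the closing tag and nudge the model to keep going,
            # as the researchers did with "wait".
            text += "Wait"
            extensions += 1
        else:
            text += chunk
            return text

result = generate_with_budget("<think>", max_extensions=2)
```

Raising `max_extensions` lengthens the test-time scaling phase; setting it to zero lets the model stop at its first attempt, which mirrors how the tag can shorten or lengthen thinking.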


The researchers also experimented with other phrases such as “alternatively” and “hmm”, but found that the best performance metrics were achieved with the “wait” tag. Since this brought the model close to the performance of o1, the researchers suggest it might be the method OpenAI used to fine-tune its reasoning models.

A TechCrunch report claims the researchers were able to create the s1-32B AI model for under $50 (roughly Rs. 4,380), highlighting that post-training a reasoning model can be done at an extremely low cost.

 

