Researchers Create a Low-Cost Open-Source AI Model to Analyse How OpenAI’s o1 Reasons

The S1-32B AI model, developed by the researchers, is said to closely match the performance of OpenAI’s o1 model.

Written by Akash Dutta, Edited by Siddharth Suvarna | Updated: 6 February 2025 18:50 IST
Highlights
  • The dataset for the AI model was created using Gemini Flash Thinking
  • Qwen2.5-32B-Instruct was used as the base AI model
  • S1-32B was developed using simple test-time scaling techniques

The AI model was developed by researchers from Stanford University and the University of Washington

Photo Credit: Unsplash/Tara Winstead

Researchers from Stanford University and the University of Washington have developed an open-source artificial intelligence (AI) model that is comparable in performance to OpenAI's o1 model. The researchers' main objective was not to create a powerful reasoning-focused model but to understand how the San Francisco-based AI firm trained its o1 series models to perform test-time scaling. Notably, the researchers were able to showcase the methodology and replicate the model's behaviour at an extremely low cost, using far fewer compute resources.

Researchers Develop S1-32B AI Model

The researchers detailed the methodology and process of developing the model in a study published on the preprint server arXiv. The process involved creating a synthetic dataset from a different AI model and applying techniques such as supervised fine-tuning (SFT), alongside ablation experiments. The model is available on GitHub.

It should be noted that the AI model was not built from scratch. The developers took Qwen2.5-32B-Instruct as the base and fine-tuned it on distilled reasoning data to create the s1-32B large language model (LLM). Released in September 2024, the base model is capable, but given its size and lack of reasoning capabilities, it cannot match OpenAI's o1.

During the process, the researchers used the Gemini Flash Thinking application programming interface (API) to generate reasoning traces and responses. A total of 59,000 triplets of questions, reasoning traces (the chain of thought, or CoT), and responses were extracted from the API. A dataset called s1K was then created by selecting 1,000 high-quality, diverse, and difficult questions along with their reasoning traces and responses.
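As a rough illustration, here is a minimal sketch of how such a triplet dataset might be assembled and filtered, assuming a hypothetical gemini_generate helper in place of the real Gemini Flash Thinking API call; the actual s1K selection scored quality, diversity, and difficulty far more carefully than the crude length filter used here:

```python
import json

def gemini_generate(question: str) -> tuple[str, str]:
    """Hypothetical helper standing in for a call to the Gemini Flash
    Thinking API; returns (reasoning_trace, final_answer)."""
    raise NotImplementedError

def build_dataset(questions, out_path="s1k_style.jsonl", limit=1000):
    """Collect question/trace/answer triplets, keeping only harder ones."""
    kept = 0
    with open(out_path, "w") as f:
        for question in questions:
            trace, answer = gemini_generate(question)
            # Crude difficulty proxy: keep questions that needed a long
            # chain of thought. The real selection was far more involved.
            if len(trace.split()) < 500:
                continue
            f.write(json.dumps({"question": question,
                                "thinking": trace,
                                "answer": answer}) + "\n")
            kept += 1
            if kept == limit:
                break
```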

After creating the s1K dataset, the researchers performed supervised fine-tuning on the Qwen2.5-32B-Instruct model. For this, basic fine-tuning hyperparameters were used. The distillation process took 26 minutes of training on 16 Nvidia H100 GPUs.
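For readers curious what that step could look like in practice, here is a hedged sketch using the Hugging Face Trainer; the hyperparameters, prompt template, and s1k_style.jsonl file are illustrative assumptions rather than the paper's exact setup, and a 32B model would in practice need multi-GPU sharding:

```python
import torch
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

BASE = "Qwen/Qwen2.5-32B-Instruct"
tokenizer = AutoTokenizer.from_pretrained(BASE)
model = AutoModelForCausalLM.from_pretrained(BASE, torch_dtype=torch.bfloat16)

# Each record holds a question, the distilled reasoning trace, and the answer.
data = load_dataset("json", data_files="s1k_style.jsonl", split="train")

def to_text(row):
    # Fold the triplet into one training string; these <think> delimiters
    # are illustrative, not necessarily the template used in the paper.
    return {"text": f"{row['question']}\n<think>\n{row['thinking']}\n</think>\n{row['answer']}"}

def tokenize(row):
    return tokenizer(row["text"], truncation=True, max_length=4096)

tokenized = data.map(to_text)
tokenized = tokenized.map(tokenize, remove_columns=tokenized.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="s1-sft", num_train_epochs=5,
                           per_device_train_batch_size=1, bf16=True),
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```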

Up to this point, the researchers did not know how OpenAI trained its models to “think” or how it managed to stop the thinking process. Without such a mechanism, a model risks overthinking indefinitely, second-guessing its output and wasting valuable processing power.

While fine-tuning the model, the researchers found something interesting: they could control inference time by adding XML think tags. Once the model reaches the closing tag, it is instructed to switch to an authoritative voice for the final answer. Inference time refers to the period in which a model generates its response, which is near real-time for a typical AI model; extending it beyond that would otherwise require careful manipulation of the code.
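A minimal sketch of that stopping mechanism, assuming the model was trained to wrap its chain of thought in <think>/</think> delimiters (the paper's exact tokens may differ) and reusing the hypothetical s1-sft checkpoint from the previous snippet:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# "s1-sft" is the checkpoint produced by the fine-tuning sketch above.
tokenizer = AutoTokenizer.from_pretrained("s1-sft")
model = AutoModelForCausalLM.from_pretrained("s1-sft", torch_dtype=torch.bfloat16)

prompt = "How many prime numbers are there below 100?\n<think>\n"
inputs = tokenizer(prompt, return_tensors="pt")

# Generation halts as soon as the closing delimiter appears; everything
# after it is the model's final, authoritative answer.
out = model.generate(**inputs, max_new_tokens=2048,
                     stop_strings=["</think>"], tokenizer=tokenizer)
print(tokenizer.decode(out[0], skip_special_tokens=True))
```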

With the s1-32B model, the researchers appended a “wait” command to force it to think beyond the usual inference period. Once added, the model began second-guessing and verifying its output. The tag could then be used to either shorten or lengthen this test-time scaling phase.
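Building on the snippet above, here is a hedged sketch of that “wait” intervention: each time the model tries to close its thinking block, the end delimiter is stripped and “Wait” is appended so the model keeps reasoning. The details are assumptions, not the researchers' exact code:

```python
def generate_with_forced_thinking(model, tokenizer, prompt, extra_rounds=2):
    """Each time the model tries to close its thinking block, strip the
    end delimiter and append 'Wait' so it keeps reasoning. More rounds
    lengthen the test-time scaling phase; zero rounds shorten it."""
    text = prompt
    for _ in range(extra_rounds):
        ids = tokenizer(text, return_tensors="pt")
        out = model.generate(**ids, max_new_tokens=2048,
                             stop_strings=["</think>"], tokenizer=tokenizer)
        text = tokenizer.decode(out[0], skip_special_tokens=True)
        text = text.removesuffix("</think>").rstrip() + "\nWait"
    # Finally, let the model finish thinking and answer on its own.
    ids = tokenizer(text, return_tensors="pt")
    out = model.generate(**ids, max_new_tokens=4096)
    return tokenizer.decode(out[0], skip_special_tokens=True)
```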

The researchers also experimented with several other phrases, such as “alternatively” and “hmm”, but found that the best performance metrics were achieved with the “wait” tag. Since this brought the model close to o1's performance, they suggest this might be the method OpenAI used to fine-tune its reasoning models.

A TechCrunch report claims that the researchers were able to create the s1-32B AI model for under $50 (roughly Rs. 4,380), highlighting that building a post-training structure for reasoning models can be done at an extremely low cost.

 
