Researchers Create a Low-Cost Open-Source AI Model to Analyse How OpenAI’s o1 Reasons

The S1-32B AI model, developed by the researchers, is said to closely match the performance of OpenAI’s o1 model.

Written by Akash Dutta, Edited by Siddharth Suvarna | Updated: 6 February 2025 18:50 IST
Highlights
  • The dataset for the AI model was created using Gemini Flash Thinking
  • Qwen2.5-32B-Instruct was used as the base AI model
  • S1-32B was developed using simple time scaling techniques

The AI model was developed by researchers from Stanford University and the University of Washington

Photo Credit: Unsplash/Tara Winstead

Researchers from Stanford University and the University of Washington have developed an open-source artificial intelligence (AI) model that is comparable in performance to OpenAI's o1 model. The researchers' main objective was not to create a powerful reasoning-focused model but to understand how the San Francisco-based AI firm trained its o1 series models to perform test-time scaling. Notably, the researchers were able to demonstrate the methodology and replicate the model's behaviour at an extremely low cost, using far fewer compute resources.

Researchers Develop s1-32B AI Model

The researchers detailed the methodology and process of developing the model in a study published on arXiv, the preprint repository. The process involved creating a synthetic dataset from a different AI model, applying established techniques such as supervised fine-tuning (SFT), and running ablation experiments to validate the design choices. The model is available on GitHub.

It should be noted that the AI model was not built from scratch. The developers took the existing Qwen2.5-32B-Instruct model and fine-tuned it on distilled reasoning data to create the s1-32B large language model (LLM). Released in September 2024, Qwen2.5-32B-Instruct is a capable model, but given its size and lack of reasoning capabilities, it cannot match OpenAI's o1.


During the process, the researchers used the Gemini Flash Thinking application programming interface (API) to generate reasoning traces and responses. A total of 59,000 triplets of questions, reasoning traces (the chain of thought, or CoT), and responses were extracted from the API. A dataset called s1K was then created by selecting 1,000 high-quality, diverse, and difficult questions along with their reasoning traces and responses.
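The selection step can be pictured with a short sketch. The field names, the trace-length proxy for difficulty, and the topic-based diversity rule below are illustrative assumptions, not the paper's exact filtering pipeline:

```python
# Hedged sketch: picking a small, high-quality subset from generated triplets.
# Scoring criteria and field names here are assumptions for illustration.

def select_subset(triplets, k=1000):
    """Keep the k 'hardest' questions, at most one per topic."""
    # Sort by reasoning-trace length as a crude proxy for difficulty.
    ranked = sorted(triplets, key=lambda t: len(t["trace"]), reverse=True)
    seen_topics = set()
    subset = []
    for t in ranked:
        if t["topic"] in seen_topics:
            continue  # enforce topic diversity
        seen_topics.add(t["topic"])
        subset.append(t)
        if len(subset) == k:
            break
    return subset

triplets = [
    {"question": "Q1", "trace": "step " * 40, "answer": "A1", "topic": "algebra"},
    {"question": "Q2", "trace": "step " * 5,  "answer": "A2", "topic": "algebra"},
    {"question": "Q3", "trace": "step " * 20, "answer": "A3", "topic": "geometry"},
]
subset = select_subset(triplets, k=2)
```

With the toy data above, the longest-trace question per topic survives, yielding one algebra and one geometry item.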


After creating the s1K dataset, the researchers performed supervised fine-tuning on the Qwen2.5-32B-Instruct model using basic fine-tuning hyperparameters. The training run took just 26 minutes on 16 Nvidia H100 GPUs.
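For supervised fine-tuning, each s1K triplet has to be packed into a single training string that separates the reasoning trace from the final answer. The delimiter strings below are assumptions for illustration; the actual special tokens depend on the model's chat template:

```python
# Hedged sketch: packing an s1K-style triplet into one training string.
# THINK_START / THINK_END are assumed delimiters, not the paper's exact tokens.

THINK_START = "<|im_start|>think"
THINK_END = "<|im_start|>answer"

def format_example(question, trace, answer):
    """Concatenate question, reasoning trace, and answer for SFT."""
    return (
        f"{question}\n"
        f"{THINK_START}\n{trace}\n"
        f"{THINK_END}\n{answer}"
    )

sample = format_example(
    "What is 7 * 8?",
    "7 * 8 is the same as 7 * 10 - 7 * 2 = 70 - 14 = 56.",
    "56",
)
```

Training on strings shaped like this teaches the base model to emit a delimited reasoning phase before committing to an answer.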

Until this point, the researchers had no idea how OpenAI trained its models to “think” or how it managed to stop the thinking process. Without such a mechanism, a model risks overthinking indefinitely as it second-guesses its output, wasting valuable processing power.


While fine-tuning the model, the researchers found something interesting: they could manipulate the inference time by adding XML-style think tags. Once the model reaches the closing tag, it is instructed to switch to an authoritative tone for its final answer. Notably, inference time refers to the period in which a typical AI model generates its response in near real time; extending the thinking phase beyond this requires careful manipulation of the generation code.

With the s1-32B model, the researchers appended the word “wait” to force it to think beyond the usual inference period. Once added, the model began second-guessing and verifying its output. This technique could then be used to either lengthen or shorten the test-time scaling phase.
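The control loop described above can be sketched in a few lines. The `generate()` stub below stands in for a real LLM call, and the `</think>` delimiter is an assumed token, so this is a minimal illustration of the idea rather than the researchers' actual implementation:

```python
# Hedged sketch: when the model tries to end its thinking early, strip the
# end-of-thinking delimiter and append "Wait" so it keeps reasoning.

THINK_END = "</think>"  # hypothetical end-of-thinking delimiter

def generate(prompt):
    # Stand-in for a real model call: always tries to finish thinking.
    return "...some reasoning..." + THINK_END

def budget_forced_generate(prompt, min_extensions=2):
    """Force at least `min_extensions` extra rounds of thinking."""
    text = generate(prompt)
    extensions = 0
    while extensions < min_extensions and text.endswith(THINK_END):
        # Suppress the premature end-of-thinking marker and nudge the
        # model to continue its chain of thought.
        text = text[: -len(THINK_END)] + "\nWait"
        text += generate(text)
        extensions += 1
    return text

out = budget_forced_generate("Is 9.11 greater than 9.9?")
```

Lowering `min_extensions` shortens the thinking phase; raising it lengthens it, which is the lever the researchers describe for scaling test-time compute.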


The researchers also experimented with other phrases such as “alternatively” and “hmm”, but found that the best performance metrics were achieved with “wait”. Since this brought the model close to o1's performance, the researchers suggest that OpenAI may have used a similar method to fine-tune its reasoning models.

A TechCrunch report claims that the researchers were able to create the s1-32B AI model for under $50 (roughly Rs. 4,380), highlighting that a post-training pipeline for reasoning models can be built at an extremely low cost.



