
Hugging Face Showcases How Test-Time Compute Scaling Can Help SLMs Outperform Larger AI Models

The researchers improved the capabilities of open AI models by building on a Google DeepMind study.

Highlights
  • Hugging Face was able to make the Llama 3B model outperform the 70B model
  • Test-time compute scaling allows models to “think longer” on problems
  • The researchers reverse-engineered closed models to develop the technique

Reasoning models such as OpenAI's o1 use test-time scaling to improve their output (Photo Credit: Hugging Face)

Hugging Face shared a new case study last week showcasing how small language models (SLMs) can outperform larger models. In the post, the platform's researchers claimed that instead of increasing the training time of artificial intelligence (AI) models, developers can focus on test-time compute. The latter is an inference-time strategy that lets a model spend more effort on each problem, and it offers approaches such as self-refinement and searching against a verifier that can improve a model's output.

How Test-Time Compute Scaling Works

In a post, Hugging Face highlighted that the traditional approach to improving an AI model's capabilities is often resource-intensive and extremely expensive. Typically, a technique dubbed train-time compute is used, in which larger pretraining datasets and better algorithms improve the way a foundation model breaks down a query and arrives at a solution.

Alternatively, the researchers claimed that test-time compute scaling, a technique in which an AI model is allowed to spend more time on a problem and to correct its own mistakes, can deliver similar results.

Citing OpenAI's o1 reasoning-focused model, which uses test-time compute, the researchers stated that this technique can give AI models enhanced capabilities without any changes to the training data or pretraining methods. However, there was one problem: since most reasoning models are closed, there is no way to know which strategies they use.

In the post, the researchers used a study by Google DeepMind along with reverse-engineering techniques to unravel how LLM developers can scale test-time compute in the post-training phase. As per the case study, simply increasing a model's processing time does not significantly improve its output on complex queries.

Instead, the researchers recommend a self-refinement algorithm that lets a model assess its response over subsequent iterations, identifying and correcting potential errors. Additionally, searching against a verifier can further improve responses. Such a verifier can be a learned reward model or a set of hard-coded heuristics.
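The self-refinement loop described above can be sketched in a few lines. This is a toy illustration under stated assumptions, not Hugging Face's implementation: `verify` and `refine` are hypothetical stand-ins for a hard-coded heuristic verifier and a model's self-correction step, applied here to a trivial square-root problem.

```python
# Toy sketch of self-refinement against a hard-coded heuristic verifier.
# `verify` and `refine` are illustrative stand-ins for LLM calls.

def verify(problem, answer):
    """Hard-coded heuristic verifier: does the candidate satisfy x*x == problem?"""
    return answer * answer == problem

def refine(problem, answer):
    """Stand-in for the model correcting its own draft: nudge toward the root."""
    return answer - 1 if answer * answer > problem else answer + 1

def self_refine(problem, draft, max_iters=20):
    """Re-check the draft each iteration and correct it until the verifier passes."""
    for _ in range(max_iters):
        if verify(problem, draft):
            return draft
        draft = refine(problem, draft)
    return draft

print(self_refine(144, draft=20))  # converges to 12
```

In a real system, the verifier would be a learned reward model or task-specific checks, and `refine` would be another model call conditioned on the previous draft and the verifier's feedback.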

More advanced techniques involve a best-of-N approach, in which a model generates multiple responses per problem and each is assigned a score to judge which is best suited. Such approaches can be paired with a reward model. Beam search, which prioritises step-by-step reasoning and assigns a score to each step, is another strategy highlighted by the researchers.
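Both search strategies can be illustrated with minimal sketches, again using toy stand-ins rather than real model calls: `reward` plays the role of a hypothetical reward model that prefers answers near the true value, and the beam-search scorer simply counts matching prefix digits.

```python
# Toy best-of-N: score each candidate response with a stand-in reward
# model and keep the highest-scoring one.
def reward(answer):
    """Hypothetical reward model: prefer answers close to the true value 12."""
    return -abs(int(answer) - 12)

def best_of_n(candidates):
    return max(candidates, key=reward)

# Toy beam search: extend partial solutions one step at a time, score each
# partial sequence, and keep only the `width` best at every step.
def beam_search(steps, expand, score, width=2):
    beams = [[]]
    for _ in range(steps):
        extended = [beam + [step] for beam in beams for step in expand(beam)]
        beams = sorted(extended, key=score, reverse=True)[:width]
    return beams[0]

# Build the two-digit answer "12" digit by digit, scoring partial matches.
def expand(partial):
    return ["1", "2", "3"]

def score(seq):
    return sum(1 for got, want in zip(seq, "12") if got == want)

print(best_of_n(["11", "13", "12", "10"]))     # "12"
print("".join(beam_search(2, expand, score)))  # "12"
```

In practice, `expand` would sample candidate reasoning steps from the language model and `score` would come from a process reward model that rates each intermediate step, which is what lets beam search favour sound step-by-step reasoning over a single long completion.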

By using the strategies above, the Hugging Face researchers made the Llama 3B SLM outperform Llama 70B, a much larger model, on the MATH-500 benchmark.

© Copyright Red Pixels Ventures Limited 2025. All rights reserved.