DeepSeek and Tsinghua Developing Self-Improving AI Models

DeepSeek is calling these new models DeepSeek-GRM — short for “generalist reward modeling”.

By Saritha Rai, Bloomberg | Updated: 7 April 2025 13:38 IST
Highlights
  • DeepSeek is exploring ways to make AI models more efficient
  • The aim is to bring AI models into alignment with human preferences
  • DeepSeek's AI revamp strategy uses fewer computing resources

DeepSeek roiled markets with its low-cost reasoning AI model back in January this year

Photo Credit: Reuters

DeepSeek is working with Tsinghua University to reduce the amount of training its AI models need, in an effort to lower operational costs.

The Chinese startup, which roiled markets with its low-cost reasoning model that emerged in January, collaborated with researchers from the Beijing institution on a paper detailing a novel approach to reinforcement learning to make models more efficient.

The new method aims to help artificial intelligence models better adhere to human preferences by offering rewards for more accurate and understandable responses, the researchers wrote. Reinforcement learning has proved effective at speeding up AI tasks in narrow applications. Expanding it to more general applications has been harder, and that is the problem DeepSeek's team is trying to solve with a technique it calls self-principled critique tuning. The strategy outperformed existing methods and models on various benchmarks while using fewer computing resources, according to the paper.
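The paper's full recipe is more involved, but the core idea of a generative reward model can be sketched in a few lines. Below is a minimal, hypothetical illustration in Python: the `generate` function is a stand-in for any language-model call (it is not DeepSeek's API), and the three steps only mirror the described pattern of self-generated principles, a critique, and a scalar reward.

```python
def generate(prompt: str) -> str:
    """Hypothetical stand-in for a language-model call; returns canned text here."""
    return "score: 7"

def generative_reward(question: str, answer: str) -> float:
    """Score an answer via self-generated principles and a critique (sketch only)."""
    # Step 1: the reward model writes its own evaluation principles.
    principles = generate(f"List principles for judging an answer to: {question}")
    # Step 2: it critiques the answer against those principles.
    critique = generate(
        f"Using these principles:\n{principles}\nCritique this answer and end "
        f"with 'score: <0-10>':\n{answer}"
    )
    # Step 3: parse a scalar reward out of the critique text.
    for line in critique.splitlines():
        if line.startswith("score:"):
            return float(line.split(":", 1)[1])
    return 0.0

# The returned scalar would feed a reinforcement-learning update as the reward.
print(generative_reward("What is 2 + 2?", "4"))  # prints 7.0 with the canned stub
```

In an actual training loop, scores like this would replace a fixed, hand-built reward function, which is what lets the approach generalize beyond narrow tasks.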

DeepSeek is calling these new models DeepSeek-GRM, short for “generalist reward modeling”, and will release them on an open-source basis, the company said. Other AI developers, including Chinese tech giant Alibaba Group Holding Ltd. and San Francisco-based OpenAI, are also pushing into a new frontier of improving reasoning and self-refining capabilities while an AI model is performing tasks in real time.


Menlo Park, California-based Meta Platforms Inc. released its latest family of AI models, Llama 4, over the weekend and billed them as its first to use the Mixture of Experts (MoE) architecture, in which each input activates only a fraction of the model's parameters. DeepSeek's models rely significantly on MoE to make more efficient use of computing resources, and Meta benchmarked its new release against models from the Hangzhou-based startup. DeepSeek hasn't specified when it might release its next flagship model.
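That efficiency comes from routing: a learned gate sends each input to only a few specialist sub-networks, so most parameters stay idle on any given token. The sketch below is a toy illustration of top-k routing, not Meta's or DeepSeek's implementation; all names and shapes are invented for the example.

```python
import numpy as np

def moe_forward(x, experts, router_weights, k=2):
    """Toy top-k MoE routing: only k of the experts run for this input."""
    logits = x @ router_weights              # one score per expert
    top_k = np.argsort(logits)[-k:]          # indices of the k best-scoring experts
    gates = np.exp(logits[top_k])
    gates /= gates.sum()                     # softmax over the selected experts only
    # Only the chosen experts compute; the rest stay idle, which is where
    # MoE's compute savings come from.
    return sum(g * experts[i](x) for g, i in zip(gates, top_k))

rng = np.random.default_rng(0)
dim, n_experts = 8, 4
# Each "expert" is just a random linear map for illustration.
experts = [lambda v, W=rng.normal(size=(dim, dim)): v @ W for _ in range(n_experts)]
router_weights = rng.normal(size=(dim, n_experts))
out = moe_forward(rng.normal(size=dim), experts, router_weights)
print(out.shape)  # (8,)
```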

© 2025 Bloomberg LP

(This story has not been edited by NDTV staff and is auto-generated from a syndicated feed.)

