
Nvidia Unveils Nemotron 3 Super Open-Source AI Model for Agentic AI Systems

Nvidia Nemotron 3 Super is a 120‑billion‑parameter open model with 12 billion active parameters.


Nemotron 3 Super uses a hybrid mixture‑of‑experts (MoE) architecture

Highlights
  • The model can be downloaded from the Nvidia website and Hugging Face
  • It is being used as one of the models for Perplexity Computer
  • Nemotron 3 Super has a 1‑million‑token context window

Nvidia has released a new open-source artificial intelligence (AI) model designed to handle complex agentic workflows. Dubbed Nemotron 3 Super, it is a hybrid mixture-of-experts (MoE) model that combines advanced reasoning capabilities with what the company says is high-accuracy task completion for autonomous agents. The new model is already being deployed by several AI firms, including Perplexity, which uses it for its new agentic Computer platform. It is also hosted on public repositories, letting interested users download and run the model locally.

Nvidia's Nemotron 3 Super AI Model Released

In a blog post, the tech giant announced and detailed the new open-source AI model. Part of the Nemotron 3 family, the Nemotron 3 Super is currently hosted on Nvidia's website, the Hugging Face platform, Perplexity, and OpenRouter. It is also coming to the Dell Enterprise Hub and is optimised for on-premise deployment on the Dell AI Factory.

The latest model targets two related problems: ballooning context and the rising cost of reasoning. AI models developed for agentic workflows tend to generate a high number of tokens, as each interaction between agents or sub-agents requires resending the full context. Similarly, executing complex tasks requires multi-step reasoning, which can substantially drive up the cost of running the model.
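As a rough back-of-the-envelope illustration of the token inflation described above (all numbers here are hypothetical, not Nemotron 3 benchmarks): if each agent call resends the accumulated context, total input tokens grow much faster than the context itself.

```python
# Hypothetical sketch of token inflation in an agentic workflow.
# Each sub-agent call resends the full accumulated context, and each
# call's output grows that context for the next call.

BASE_CONTEXT = 20_000      # hypothetical starting context, in tokens
OUTPUT_PER_CALL = 2_000    # hypothetical tokens each sub-agent appends
AGENT_CALLS = 10

total_input = 0
context = BASE_CONTEXT
for _ in range(AGENT_CALLS):
    total_input += context        # full context is resent on every call
    context += OUTPUT_PER_CALL    # the call's output grows the context

print(total_input)  # 290000 input tokens for 10 calls
```

Under these made-up numbers, ten agent calls consume 290,000 input tokens even though the final context is only 40,000 tokens, which is why per-token pricing and long context windows matter so much for agents.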

With its hybrid architecture, the Nemotron 3 Super has a total of 120 billion parameters, of which 12 billion are active. It also gets a context window of one million tokens, which allows agents to retain full workflow memory. Its development also used a technique dubbed Latent MoE, which Nvidia says improves accuracy by activating four experts for the cost of one when generating the next token at inference.
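The "active parameters" figure can be illustrated with a minimal mixture-of-experts routing sketch. This is not Nvidia's actual Nemotron 3 architecture; the expert count, parameter split, and scoring function are all hypothetical, chosen only so the total-to-active ratio matches the reported 120B/12B figures.

```python
# Minimal, purely illustrative MoE routing sketch (NOT Nemotron 3's
# real design). A router scores every expert for the current token and
# only the top-k experts run, so only a fraction of the model's total
# parameters are "active" per token.

NUM_EXPERTS = 10          # hypothetical expert count
PARAMS_PER_EXPERT = 12    # hypothetical size per expert, in billions
TOP_K = 1                 # experts activated per token

def route(score_fn, num_experts=NUM_EXPERTS, top_k=TOP_K):
    """Return indices of the top_k highest-scoring experts."""
    scored = sorted(((score_fn(e), e) for e in range(num_experts)),
                    reverse=True)
    return [expert for _, expert in scored[:top_k]]

total_params = NUM_EXPERTS * PARAMS_PER_EXPERT    # 120 (billion)
active_params = TOP_K * PARAMS_PER_EXPERT         # 12 (billion)
print(f"total: {total_params}B, active per token: {active_params}B")
```

In a real MoE layer the router is a learned network and load-balancing losses keep experts evenly used; the point of the sketch is only that compute per token scales with the active parameters, not the total.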

The tech giant said it is releasing the model with open weights under a permissive licence. On the dataset and training, the company says the Nemotron 3 Super was trained on synthetic data generated using frontier reasoning models. Nvidia said it is publishing the complete methodology, including more than 10 trillion tokens of pre- and post-training data, 15 reinforcement learning training environments, and evaluation recipes.



Akash Dutta
Akash Dutta is a Chief Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food.


© Copyright Red Pixels Ventures Limited 2026. All rights reserved.