
Google Introduces Gemma 4 Open-Source AI Model, Enables Building Autonomous Agents

Gemma 4 is available in four sizes: Effective 2B, Effective 4B, 26B MoE, and 31B Dense.


Photo Credit: Google

Gemma 4 is available via Google AI Studio, Vertex AI, Hugging Face, Kaggle, and Ollama

Highlights
  • Google is making Gemma 4 available with Apache 2.0 license
  • The open-source model is capable of multi-step planning and deep logic
  • Gemma 4 is natively trained on over 140 languages

Google on Thursday introduced the Gemma 4 artificial intelligence (AI) model, the first in a new family that brings several improvements over its predecessors. While Gemma 3 focused on text and visual reasoning capabilities, the Mountain View-based tech giant says the latest iteration adds agentic capabilities and advanced reasoning to the open-source model. Available in four sizes, the large language model (LLM) will be offered across Google's developer platforms and can also be downloaded from third-party repositories to run locally.

Google Releases Gemma 4

In a blog post, the tech giant announced and detailed the Gemma 4 AI model. The model is available in four sizes and configurations: Effective 2B (E2B), Effective 4B (E4B), 26B Mixture-of-Experts (MoE), and 31B Dense. The context window has also been increased to 256K tokens, up from 128K tokens in Gemma 3. Additionally, it has been trained natively on more than 140 languages.

One big change from the previous generation is that Gemma 4 is now available under the permissive Apache 2.0 license, which allows both academic and commercial use. The LLM can be used directly via Google AI Studio and Vertex AI, or downloaded from the company's Hugging Face, Kaggle, and Ollama listings.

Three standout features in Gemma 4 are support for advanced reasoning, agentic workflows, and code generation. With advanced reasoning, it is now capable of multi-step planning and deep logic, and is said to show improvements in mathematics and instruction following. The model also supports function calling and structured JSON output, letting developers build AI agents on top of it.
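Function calling typically works by having the model emit a structured JSON object that names a tool and its arguments, which the host application then parses and executes. Below is a minimal sketch of that parse-and-dispatch loop; the JSON schema and the hard-coded model response stand in for an actual Gemma 4 call and are assumptions, not Google's documented format.

```python
import json

# A tool the application exposes to the model. In a real agent, the tool's
# name and parameters would be described to the model alongside the prompt.
def get_weather(city: str) -> str:
    # Placeholder implementation; a real tool would call a weather API.
    return f"22 degrees and sunny in {city}"

TOOLS = {"get_weather": get_weather}

# Hypothetical structured output from the model (Gemma 4's exact JSON
# schema is an assumption here; the dispatch pattern itself is generic).
model_output = '{"name": "get_weather", "arguments": {"city": "Paris"}}'

def dispatch(raw: str) -> str:
    """Parse the model's JSON function call and run the matching tool."""
    call = json.loads(raw)
    tool = TOOLS[call["name"]]
    return tool(**call["arguments"])

print(dispatch(model_output))  # -> 22 degrees and sunny in Paris
```

In practice the tool's return value would be fed back to the model so it can compose a final answer, which is what makes multi-step agentic workflows possible.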

Additionally, Google claims that the LLM supports high-quality offline code generation, although it is unclear how it compares with proprietary tools such as Claude Code and Codex. Its clear advantages, however, are free usage and on-device privacy and security.

Other notable features include native processing of videos and images with support for variable resolutions. Google claims the model handles visual tasks such as optical character recognition (OCR) and chart understanding. In addition, the E2B and E4B models support native audio input for speech recognition and understanding.



Akash Dutta

