
After ChatGPT Translate, Google Releases Multiple Open-Source Translation Models

Google has released three variants of its TranslateGemma AI models.

Photo Credit: Google

Google’s TranslateGemma can be downloaded from Hugging Face and Kaggle

Highlights
  • TranslateGemma is available in 4B, 12B, and 27B sizes
  • The AI models can also translate text in images
  • TranslateGemma has been evaluated on 55 language pairs

Google's aggressive artificial intelligence (AI) push has not slowed down in 2026. The company has already announced a partnership with Apple, released new shopping tools and a protocol, introduced Personal Intelligence in Gemini, and added the chatbot to its Trends website. Now, the company has shifted its focus towards the open-source community with the release of the TranslateGemma models. These multilingual AI models are designed to translate across a large number of languages, accepting both text and images (input only).

TranslateGemma Models Released

In a blog post, the Mountain View-based tech giant announced three variants of the TranslateGemma AI models. The models are available to download from Google's Hugging Face listing and from Kaggle. Additionally, developers and enterprises can access them via Vertex AI, the company's cloud-based AI hub. The models ship under a permissive licence that allows both academic and commercial use.

TranslateGemma is available in 4B, 12B, and 27B sizes (where 4B refers to four billion parameters). The smallest model is said to be optimised for mobile and edge deployment, and the 12B variant is designed for consumer laptops. The largest 27B model offers maximum fidelity and can be run locally on a single Nvidia H100 GPU or TPU.
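As a rough illustration of what running one of these models locally could look like, here is a minimal sketch using Hugging Face's transformers library. Note that the model ID `google/translategemma-4b` and the plain-instruction prompt format are assumptions for illustration, not confirmed details from the release; check the official Hugging Face listing for the actual identifiers and recommended prompting.

```python
def build_prompt(text: str, source: str, target: str) -> str:
    """Compose a plain instruction-style translation prompt (assumed format)."""
    return f"Translate the following text from {source} to {target}:\n{text}"


if __name__ == "__main__":
    # Hypothetical local inference with the smallest (4B) variant.
    # The model ID below is an assumption; see Google's Hugging Face page.
    from transformers import pipeline  # pip install transformers

    translator = pipeline("text-generation", model="google/translategemma-4b")
    prompt = build_prompt("Good morning", "English", "French")
    result = translator(prompt, max_new_tokens=64)
    print(result[0]["generated_text"])
```

On capable hardware, the same pattern would apply to the 12B and 27B checkpoints by swapping the model ID.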

The models are built on Gemma 3. Researchers used supervised fine-tuning (SFT) with a diverse dataset, which the post claims allowed the models to achieve broad language coverage even for low-resource languages (those where training data is scarce). The models were then further refined with reinforcement learning (RL) to improve translation quality.

The company claimed that the 12B TranslateGemma model outperforms Gemma 3 27B on the WMT24++ machine translation benchmark. With this performance, Google says, developers can match the baseline model's translation quality while using less than half the parameters.

Google's latest translation-focused AI models are claimed to be trained and evaluated on 55 language pairs, including Spanish, French, Chinese, and Hindi. The company also claimed to have trained the models on nearly 500 additional language pairs. Notably, apart from direct text translation, the models also accept images as input and can detect and translate text within them.
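The image-input capability described above might be exercised along the following lines with the transformers library. This is a hedged sketch: the model ID, the `image-text-to-text` task name for this model, and the instruction wording are all assumptions for illustration; the official model card would document the supported calling convention.

```python
def build_image_prompt(target_language: str) -> str:
    """Assumed instruction asking the model to translate text found in an image."""
    return (
        "Detect any text in this image and translate it into "
        f"{target_language}."
    )


if __name__ == "__main__":
    # Hypothetical multimodal inference; model ID is an assumption.
    from transformers import pipeline  # pip install transformers
    from PIL import Image  # pip install pillow

    pipe = pipeline("image-text-to-text", model="google/translategemma-4b")
    image = Image.open("street_sign.jpg")
    output = pipe(text=build_image_prompt("English"), images=image)
    print(output)
```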



Akash Dutta
Akash Dutta is a Chief Sub Editor at Gadgets 360. He is particularly interested in the social impact of technological developments and loves reading about emerging fields such as AI, metaverse, and fediverse. In his free time, he can be seen supporting his favourite football club - Chelsea, watching movies and anime, and sharing passionate opinions on food. More


© Copyright Red Pixels Ventures Limited 2026. All rights reserved.