Google Releases TranslateGemma, an Open AI Translation Model That Can Run on Phones

January 16 — Google announced in a blog post yesterday, January 15, the release of TranslateGemma, a series of open translation models built on the Gemma 3 architecture. The series comes in 4B, 12B, and 27B parameter sizes, supports 55 core languages plus multimodal image translation, and is now available for download on Kaggle and Hugging Face.


In terms of performance, the Google team ran rigorous evaluations using the WMT24++ benchmark (covering 55 high-, medium-, and low-resource languages) and the MetricX metric.

The results showed that TranslateGemma 12B's translation quality exceeds that of the Gemma 3 27B baseline model, which has more than twice as many parameters. In other words, developers can get better translation results with roughly half the compute, significantly increasing throughput and reducing latency.

Meanwhile, the smallest 4B model also performs impressively, matching the 12B baseline model and bringing strong translation capability to mobile and edge computing devices.

Technically, TranslateGemma's strong performance per parameter comes from a two-stage fine-tuning process.

The first stage is supervised fine-tuning (SFT): Google trains the Gemma 3 base model on a mix of high-quality synthetic data generated with the Gemini model and human translation data. A reinforcement learning (RL) stage follows, using advanced reward models such as MetricX-QE and AutoMQM to guide the model toward more fluent and natural translations.

In terms of language coverage, TranslateGemma focuses on 55 optimized and validated core languages (including Spanish, Chinese, and Hindi) and extends experimental support to nearly 500 languages, providing a solid foundation for academic research on endangered languages.

In addition, thanks to the Gemma 3 architecture, the new models retain full multimodal capability. Tests show that, without any additional fine-tuning on visual tasks, the gains in text translation carry over directly to translating text within images.

To suit different development needs, each of TranslateGemma's three sizes targets a specific deployment scenario:

  • The 4B model is optimized for phones and edge devices, enabling efficient on-device inference.
  • The 12B model targets consumer-grade laptops, bringing research-level performance to local development.
  • The 27B model is aimed at scenarios demanding the highest quality and can run on a single H100 GPU or Cloud TPU.

All models are now available on Kaggle, Hugging Face, and Vertex AI.
