{"id":48780,"date":"2026-01-16T12:18:05","date_gmt":"2026-01-16T04:18:05","guid":{"rendered":"https:\/\/www.1ai.net\/?p=48780"},"modified":"2026-01-16T12:18:05","modified_gmt":"2026-01-16T04:18:05","slug":"%e8%b0%b7%e6%ad%8c%e6%9c%80%e5%bc%ba-ai-%e5%bc%80%e6%94%be%e7%bf%bb%e8%af%91%e6%a8%a1%e5%9e%8b%ef%bc%9atranslategemma-%e7%99%bb%e5%9c%ba%ef%bc%8c%e6%89%8b%e6%9c%ba%e4%b9%9f%e8%83%bd%e8%b7%91","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/48780.html","title":{"rendered":"Google's strongest, AI Open Translation Model: Translate Gemma, mobile phone can run"},"content":{"rendered":"<p>January 16th.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%b0%b7%e6%ad%8c\" title=\"[View articles tagged with [Google]]\" target=\"_blank\" >Google<\/a>Yesterday, January 15th, a blog was published, based on the Gemma 3 structure, which was published on the blog Gemma 3<strong>roll out <a href=\"https:\/\/www.1ai.net\/en\/tag\/translategemma\" title=\"_Other Organiser\" target=\"_blank\" >TranslateGema<\/a> open<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%bf%bb%e8%af%91%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [translation model]]\" target=\"_blank\" >translation model<\/a>series<\/strong>With a total of 4B, 12B and 27B parameter sizes, 55 core languages and multimodular image translations are now available for download in Kagle and Hugging Face\u3002<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-48781\" title=\"aa75a0f1j00t8xvx0006pd000v90hlp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/01\/aa75a0f1j00t8xvx0006pd000v900hlp.jpg\" alt=\"aa75a0f1j00t8xvx0006pd000v90hlp\" width=\"1125\" height=\"633\" \/><\/p>\n<p>In terms of performance, the Google team conducted rigorous testing using the WMT24++ benchmark (55 languages in high, medium and low resource languages) and MetricX indicators\u3002<\/p>\n<p>The results showed that the quality of translation of Translate Gemma 12B exceeded the Gemma 3 27B 
baseline model, which was twice the size of the parameter. This means that developers need to consume only half of their calculus resources, i.e. they can obtain better translation results, thereby significantly increasing throughput and reducing delays\u3002<\/p>\n<p>AT THE SAME TIME, THE SMALLEST 4B MODEL ALSO SHOWS AMAZING POWER, WHICH IS COMPARABLE TO THE 12B BASELINE MODEL AND PROVIDES A STRONG TRANSLATION CAPABILITY FOR MOBILE AND EDGE COMPUTING DEVICES\u3002<\/p>\n<p>Technically, TranslateGemma's high-density intelligence is derived from a unique two-stage fine-tuning process\u3002<\/p>\n<p>The first is monitoring fine-tuning (SFT), Google, which uses the Gemini model to mix high-quality synthetic data with manual translation data to train Gemma 3 base; and then introducing the intensive learning (RL) phase, which leads to more linguistic and natural translations through advanced incentive models such as MetricX-QE and AutoMQM\u3002<\/p>\n<p>In terms of language coverage, TranslateGema focused on optimizing and validating 55 core languages (covering Spanish, Chinese, Hindi, etc.) and further exploring nearly 500 languages, providing a solid basis for academic research on endangered languages\u3002<\/p>\n<p>In addition, thanks to Gemma 3 ' s structural advantages, the new model retains a full multi-modular capability. 
Tests indicate that no additional fine-tuning on visual tasks is required: the gains in text translation directly improve the translation of text appearing in images.<\/p>\n<p>To suit different development needs, each of TranslateGemma's three sizes targets a specific deployment scenario:<\/p>\n<ul>\n<li>The 4B model is optimized for mobile phones and edge devices, enabling efficient on-device inference<\/li>\n<li>The 12B model targets consumer-grade laptops, bringing research-grade performance to local development<\/li>\n<li>The 27B model is aimed at scenarios demanding the highest quality and can run on a single H100 GPU or Cloud TPU.<\/li>\n<\/ul>\n<p>All models are now live on Kaggle, Hugging Face and Vertex AI.<\/p>\n<p>1AI reference links:<\/p>\n<ul class=\"custom_reference list-paddingleft-1\">\n<li class=\"list-undefined list-reference-paddingleft\">\n<p data-vmark=\"27bb\"><a href=\"https:\/\/arxiv.org\/pdf\/2601.09012\" target=\"_blank\" rel=\"noopener\">Google TranslateGemma Technical Report<\/a><\/p>\n<\/li>\n<li class=\"list-undefined list-reference-paddingleft\">\n<p data-vmark=\"a8f7\"><a href=\"https:\/\/www.kaggle.com\/models\/google\/translategemma\/\" target=\"_blank\" rel=\"noopener\">Download on Kaggle<\/a><\/p>\n<\/li>\n<li class=\"list-undefined list-reference-paddingleft\">\n<p data-vmark=\"3493\"><a href=\"https:\/\/huggingface.co\/collections\/google\/translategemma\" target=\"_blank\" rel=\"noopener\">Download on Hugging Face<\/a><\/p>\n<\/li>\n<li class=\"list-undefined list-reference-paddingleft\">\n<p data-vmark=\"2c92\"><a href=\"https:\/\/colab.research.google.com\/github\/google-gemini\/gemma-cookbook\/blob\/main\/Research\/[TranslateGemma]Example.ipynb\" target=\"_blank\" rel=\"noopener\">Explore via the Gemma Cookbook<\/a><\/p>\n<\/li>\n<li class=\"list-undefined list-reference-paddingleft\">\n<p data-vmark=\"1007\"><a 
href=\"https:\/\/console.cloud.google.com\/vertex-ai\/publishers\/google\/model-garden\/translategemma\" target=\"_blank\" rel=\"noopener\">Deploy on Vertex AI<\/a><\/p>\n<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>On January 16 it was reported that Google published a blog post yesterday, January 15, launching the TranslateGemma family of open translation models, built on the Gemma 3 architecture, in 4B, 12B and 27B parameter sizes, supporting 55 core languages and multimodal image translation, now available for download on Kaggle and Hugging Face. In terms of performance, the Google team conducted rigorous testing using the WMT24++ benchmark (55 languages spanning high-, medium- and low-resource tiers) and the MetricX metric. The results show that TranslateGemma 12B surpasses the translation quality of the twice-as-large Gemma<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[8146,7510,281],"collection":[],"class_list":["post-48780","post","type-post","status-publish","format-standard","hentry","category-news","tag-translategemma","tag-7510","tag-281"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/48780","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=48780"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/48780\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=48780"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/
wp-json\/wp\/v2\/categories?post=48780"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=48780"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=48780"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}