{"id":48063,"date":"2025-12-31T12:01:20","date_gmt":"2025-12-31T04:01:20","guid":{"rendered":"https:\/\/www.1ai.net\/?p=48063"},"modified":"2025-12-31T12:01:20","modified_gmt":"2025-12-31T04:01:20","slug":"%e8%85%be%e8%ae%af%e6%b7%b7%e5%85%83%e5%bc%80%e6%ba%90%e7%bf%bb%e8%af%91%e6%a8%a1%e5%9e%8b-1-5%ef%bc%9a%e6%89%8b%e6%9c%ba-1gb-%e5%86%85%e5%ad%98%e5%8d%b3%e5%8f%af%e8%bf%90%e8%a1%8c%ef%bc%8c%e6%95%88","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/48063.html","title":{"rendered":"OPEN SOURCE TRANSLATION MODEL 1.5: CELL PHONE 1GB MEMORY TO RUN, GOING BEYOND COMMERCIAL API"},"content":{"rendered":"<p>The news of December 31st<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%85%be%e8%ae%af%e6%b7%b7%e5%85%83\" title=\"[View articles tagged with [Tencent Hybrid]]\" target=\"_blank\" >Tencent Hunyuan<\/a>Announce<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >Open Source<\/a><strong><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e7%bf%bb%e8%af%91%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [translation model]]\" target=\"_blank\" >translation model<\/a> 1.5 Version<\/strong>There are two models: Tencent-HY-MT1.5-1.8B and Tencent-HY-MT1.5-7B, which support the translation of 33 languages into one another and 5 Chinese\/Traditional languages, including Czech, Marathi, Estonian, Icelandic, etc\u3002<\/p>\n<p>Both models are currently in the process of being downloaded directly from open-source communities such as Github and Huggingface\u3002<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-48064\" title=\"61f63eecj00t848h00051d000ufdp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/12\/61f63eecj00t848h00051d000u000fdp.jpg\" alt=\"61f63eecj00t848h00051d000ufdp\" width=\"1080\" height=\"553\" \/><\/p>\n<ul>\n<li><strong>HY-MT1.5-1.8B PRIMARILY FOR CONSUMER-GRADE DEVICES SUCH AS MOBILE PHONES<\/strong>, supported direct 
end-side deployment and off-line real-time translation with a flow of 1GB memory only, and declared that, in the very small amount of parameters, the effect exceeded most commercial translation API. At the same time, compared to the mainstream commercial translation model API, HY-MT1.5-1.8B reasoning is faster, with an average of 0.18 seconds for 50 tokens and around 0.4 seconds for other models\u3002<\/li>\n<li><strong>THE HY-MT1.5-7B MODEL IS MUCH MORE EFFECTIVE THAN THE PREVIOUS VERSION<\/strong>THIS IS AN UPGRADE OF THE ORIGINAL WMT25 30-LANGUAGE TRANSLATION CHAMPION MODEL, WHICH HAS FOCUSED ON IMPROVING TRANSLATION ACCURACY AND SIGNIFICANTLY REDUCING THE MIXING OF TRANSLATIONS WITH NOTES AND LANGUAGES, AND FURTHER INCREASING ITS USEFULNESS\u3002<\/li>\n<\/ul>\n<p>IN THE CONTEXT OF THE ACTUAL USE BY SOME USERS, THE HYBRID TRANSLATION OF 1.8B AND 7B, BOTH SIZE MODELS, ARE USED AT THE SAME TIME TO ACHIEVE THE SYNERGISTIC DEPLOYMENT OF END-SIDE AND CLOUD-SIDE MODELS AND TO ENHANCE THEIR CONSISTENCY AND STABILITY\u3002<\/p>\n<p>There is a concentration of tests in Flores200, WMT25 and Manhan languages, which are used to test translations between Chinese and foreign and between English<strong>Tencent-HY-MT1.5-1.8B Comprehensively beyond medium-sized open-source models and mainstream commercial translation API<\/strong>, reach the 90-point level of the Gemini-3.0-Pro super-sized closed-source model. 
In the WMT25 and Chinese-minority-language translation test sets, its results were only slightly below Gemini-3.0-Pro and far above other models.<\/p>\n<p>The HY-MT1.5-1.8B model scored roughly 78% in the Flores-200 quality evaluation with an average response time of 0.18 seconds, surpassing mainstream commercial translation APIs and making it well suited to high-volume, real-time translation scenarios such as instant messaging, intelligent customer service and mobile translation apps.<\/p>\n<p>In addition, both models gain more comprehensive support for several common scenarios: terminology translation, long conversations, and formatted text (e.g., web pages):<\/p>\n<ul>\n<li>The first is terminology translation. <strong>Version 1.5 adds a custom glossary capability<\/strong>: users can build dedicated glossaries in advance for different industries and professional scenarios (e.g., medical, legal, financial, scientific) to keep key terms highly consistent and accurate across a translation. A glossary is imported through a simple configuration, and <strong>the model prioritizes the user-defined standard terms during translation<\/strong>, improving the reliability and authority of translations of professional documents, technical manuals, contract texts and the like.<\/li>\n<li>The second is contextual translation. The Hunyuan translation model has long-text and dialogue-level context understanding, so it can continuously refine subsequent translations based on the preceding text, improving coherence and consistency in long dialogues, multi-turn Q&amp;A, consecutive paragraphs and similar settings. 
Whether translating meeting minutes, interview transcripts, novel chapters or lengthy technical documents, the model captures and maintains contextual logic, avoiding reference confusion, semantic fragmentation and inconsistent style.<\/li>\n<li>The third is <strong>format-preserving translation<\/strong>. By following instructions, the Hunyuan translation model can keep formatting information intact before and after translation, making the results more accurate and practical.<\/li>\n<\/ul>\n<p>To illustrate the translation quality of Tencent-HY-MT1.5-1.8B, the official team shows a comparison against the iPhone's built-in offline translation:<\/p>\n<p>Technically, HY-MT1.5-1.8B achieves large-model quality at a small size thanks to an on-policy distillation strategy: HY-MT1.5-7B serves as the teacher and guides the 1.8B model in real time, so the student avoids rote memorization of fixed reference answers and instead improves by correcting deviations in its own predicted token distributions.<\/p>\n<p>The Hunyuan translation model not only took first place in 30 language categories at the international machine-translation competition (WMT25), but also topped the Hugging Face trending list within a week of its initial open-source release. It has already been applied in many of Tencent's own business scenarios, <strong>including Tencent Meeting, WeCom, QQ Browser and in-client translation<\/strong>.<\/p>\n<p>To make the models easy for developers to use, this open-source release is live in communities such as GitHub and Hugging Face, with deployment supported on multiple chip platforms, including Arm, Qualcomm, Intel and MetaX. 
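The custom-glossary feature described above can be sketched at the prompt level. This is a minimal, illustrative sketch: the prompt template and the `build_glossary_prompt` helper are assumptions for illustration, not the format the HY-MT models actually require (that is documented in the official repository).

```python
# Hedged sketch: injecting a user-defined glossary into a translation prompt,
# so the model is instructed to prefer the user's standard terms.
# The template below is a hypothetical example, not the official HY-MT format.

def build_glossary_prompt(text: str, glossary: dict, src_lang: str, tgt_lang: str) -> str:
    """Compose a translation prompt that lists user-defined standard terms."""
    terms = "\n".join(f"- {src} => {tgt}" for src, tgt in glossary.items())
    return (
        f"Translate the following {src_lang} text into {tgt_lang}.\n"
        f"Use these standard terms wherever they occur:\n{terms}\n\n"
        f"Text: {text}\nTranslation:"
    )

# Example: a small medical glossary keeps key terms consistent.
prompt = build_glossary_prompt(
    "患者需每日服用阿司匹林。",
    {"阿司匹林": "aspirin", "患者": "patient"},
    "Chinese",
    "English",
)
print(prompt)
```

The same pattern extends to per-industry glossaries (legal, financial, scientific): only the dictionary passed in changes, so one configuration step covers every subsequent translation call.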
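The on-policy distillation idea above can be illustrated numerically: the student samples positions from its own rollout, and the training signal is a per-position KL divergence pulling the student's predicted distribution toward the teacher's. This is a toy sketch with made-up logits and one common choice of KL direction; the actual HY-MT training objective is not detailed in this article.

```python
# Hedged sketch of the on-policy distillation signal: KL divergence between
# the teacher's and the student's next-token distributions, measured at
# positions the student itself generated. Logits here are toy numbers.
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def on_policy_kl(student_logits: np.ndarray, teacher_logits: np.ndarray) -> float:
    """Mean KL(teacher || student) over sampled positions (one common choice)."""
    p = softmax(teacher_logits)  # teacher distribution (7B in the article)
    q = softmax(student_logits)  # student distribution (1.8B in the article)
    kl = (p * (np.log(p) - np.log(q))).sum(axis=-1)
    return float(kl.mean())

rng = np.random.default_rng(0)
student = rng.normal(size=(4, 8))                    # 4 positions, vocab of 8
teacher = student + 0.1 * rng.normal(size=(4, 8))    # teacher close to student
print(round(on_policy_kl(student, teacher), 4))
```

The key contrast with standard distillation is the sampling distribution: the loss is evaluated on the student's own outputs rather than fixed reference translations, so the teacher corrects the mistakes the student actually makes at inference time.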
1AI attaches the open-source links below:<\/p>\n<ul>\n<li>Hunyuan website: https:\/\/hunyuan.tencent.com\/modeSquare\/home\/list<\/li>\n<li>GitHub link: https:\/\/github.com\/Tencent-Hunyuan\/HY-MT<\/li>\n<li>HuggingFace link: https:\/\/huggingface.co\/collections\/tencent\/hy-mt15<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>December 31 news: Tencent Hunyuan announced open-source translation model version 1.5, comprising two models, Tencent-HY-MT1.5-1.8B and Tencent-HY-MT1.5-7B, which support mutual translation among 33 languages plus 5 Chinese ethnic-minority languages and dialects; besides common languages such as Chinese, English and Japanese, they cover less common languages such as Czech, Marathi, Estonian and Icelandic. Both models are available for direct download from open-source communities such as GitHub and Hugging Face. HY-MT1.5-1.8B, aimed primarily at consumer-grade devices such as mobile phones, is quantized to support direct on-device deployment and offline real-time translation using 1GB of memory 
only<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[219,7510,2657],"collection":[],"class_list":["post-48063","post","type-post","status-publish","format-standard","hentry","category-news","tag-219","tag-7510","tag-2657"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/48063","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=48063"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/48063\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=48063"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=48063"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=48063"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=48063"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}