TENCENT HUNYUAN OPEN-SOURCES TRANSLATION MODEL 1.5: RUNS ON A PHONE WITH 1GB OF MEMORY, OUTPERFORMS COMMERCIAL APIS

On December 31, Tencent Hunyuan announced the open-source release of version 1.5 of its translation model. There are two models, Tencent-HY-MT1.5-1.8B and Tencent-HY-MT1.5-7B, which support mutual translation among 33 languages, including Czech, Marathi, Estonian, and Icelandic, as well as translation between Chinese and 5 ethnic-minority languages.

Both models are now available for direct download from open-source communities such as GitHub and Hugging Face.


  • HY-MT1.5-1.8B targets consumer-grade devices such as mobile phones. It supports direct on-device deployment and offline real-time translation with only 1GB of memory, and Tencent claims that despite its very small parameter count it outperforms most commercial translation APIs. It is also faster than mainstream commercial translation APIs: generating 50 tokens takes 0.18 seconds on average, versus around 0.4 seconds for other models.
  • HY-MT1.5-7B is substantially improved over the previous version. It is an upgrade of the earlier model that won first place in 30 language pairs at WMT25, focusing on improving translation accuracy, significantly reducing extraneous notes and mixed-language output, and further increasing practical usability.

Informed by real-world usage, Hunyuan translation ships both model sizes, 1.8B and 7B, at the same time, enabling collaborative deployment across device and cloud and improving consistency and stability.
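One plausible way to combine the two sizes is a simple routing policy: serve latency-critical or offline requests from the on-device 1.8B model and escalate larger jobs to the cloud 7B model. The thresholds and return labels below are purely illustrative assumptions, not part of Tencent's published design:

```python
def pick_model(network_ok: bool, latency_budget_s: float, text_len_chars: int) -> str:
    """Hypothetical end/cloud routing sketch for a two-size deployment.

    Prefer the small on-device model when offline or when the latency
    budget is tight; send long, quality-critical texts to the cloud model.
    """
    if not network_ok:
        # Offline: only the on-device model is reachable.
        return "HY-MT1.5-1.8B (on-device)"
    if latency_budget_s < 0.5 or text_len_chars < 200:
        # Short or latency-critical requests stay on-device.
        return "HY-MT1.5-1.8B (on-device)"
    # Long documents with a relaxed budget go to the larger cloud model.
    return "HY-MT1.5-7B (cloud)"
```

In practice such a router would also consider battery state and per-language quality, but the sketch shows the basic consistency idea: both paths run the same model family, so terminology and style stay aligned.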

The evaluation covers the Flores-200, WMT25, and Minhan (minority-Chinese) test sets, which measure translation between Chinese and foreign languages and between English and foreign languages. On them, Tencent-HY-MT1.5-1.8B comprehensively outperforms medium-sized open-source models and mainstream commercial translation APIs, reaching about 90% of the level of the super-large closed-source model Gemini-3.0-Pro. On the WMT25 and Minhan test sets, its results trail only Gemini-3.0-Pro, well ahead of the other models.

The HY-MT1.5-1.8B model scored approximately 78% in the Flores-200 quality evaluation with an average response time of 0.18 seconds, surpassing mainstream commercial translation APIs and making it well suited to high-volume, real-time scenarios such as instant messaging, intelligent customer service, and mobile translation apps.

In addition, both models add more comprehensive support for common scenarios: terminology translation, long conversations, and formatted text (e.g., web pages):

  • First, terminology intervention. Version 1.5 adds a user-defined glossary capability: users can build a dedicated glossary in advance for different industries and professional scenarios (e.g., medical, legal, financial, scientific) to ensure that key terms remain consistent and accurate in translation. After importing the glossary through a simple configuration, the model prioritizes the user-defined standard terms during translation, improving the reliability and authority of translations of professional documents, technical manuals, contracts, and the like.
  • Second, contextual translation. The Hunyuan translation model has long-text and dialogue context understanding: it continuously refines subsequent translations based on the preceding text, improving coherence and consistency in long dialogues, multi-turn Q&A, and consecutive paragraphs. Whether translating lengthy meeting minutes, interviews, novel chapters, or technical documents, the model captures and maintains contextual logic, avoiding referential confusion, semantic fragmentation, and inconsistent style.
  • Third, format-preserving translation. Through instruction following, the Hunyuan translation model can preserve formatting information across the translation, making the results more accurate and practical.
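A common way to implement glossary intervention is to inject the user's term mappings into the translation prompt. The exact prompt format HY-MT1.5 expects is not specified in this article, so the function below is only a generic sketch; the wording, field names, and arrow notation are all assumptions:

```python
def build_glossary_prompt(source_text: str, glossary: dict,
                          src: str = "English", tgt: str = "Chinese") -> str:
    """Sketch of a glossary-injected translation prompt.

    The real HY-MT1.5 input format may differ; this only illustrates
    the idea of asking the model to prioritize user-defined terms.
    """
    lines = [f"Translate the following text from {src} to {tgt}."]
    if glossary:
        lines.append("Use these fixed term translations:")
        for term, rendering in glossary.items():
            lines.append(f"- {term} => {rendering}")
    lines.append("Text:")
    lines.append(source_text)
    return "\n".join(lines)
```

With a medical glossary such as `{"tachycardia": "心动过速"}`, the prompt pins that rendering so the term stays identical across a whole document.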

To illustrate the translation quality of Tencent-HY-MT1.5-1.8B, the official release shows a comparison against the offline translation results of the Apple iPhone:

Technically, HY-MT1.5-1.8B achieves large-model quality at a small size thanks to an on-policy distillation strategy: HY-MT1.5-7B acts as the teacher and guides the 1.8B student in real time on the student's own generations. This keeps the small model from rote-memorizing fixed reference answers and instead lets it learn by correcting deviations in its predicted sequence distribution.
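The core idea of on-policy distillation can be sketched as a per-token KL penalty: the student samples its own output, both models score that same rollout, and the student is trained to shrink the gap between its token distribution and the teacher's. This is a minimal illustration of the general technique, not Tencent's actual training code, and the exact loss they use (KL direction, weighting) is an assumption here:

```python
import math

def log_softmax(logits):
    """Numerically stable log-softmax over one logit vector."""
    m = max(logits)
    lse = m + math.log(sum(math.exp(x - m) for x in logits))
    return [x - lse for x in logits]

def on_policy_distill_loss(student_logits, teacher_logits):
    """Mean per-token reverse KL(student || teacher).

    Both arguments are lists of per-position logit vectors obtained by
    running the student and the teacher over the *student's own* sampled
    output — scoring the student's rollout is what makes this on-policy,
    as opposed to imitating fixed reference translations.
    """
    total = 0.0
    for s_row, t_row in zip(student_logits, teacher_logits):
        log_ps = log_softmax(s_row)
        log_pt = log_softmax(t_row)
        # Penalize probability mass the student places where the teacher would not.
        total += sum(math.exp(lp) * (lp - lq) for lp, lq in zip(log_ps, log_pt))
    return total / len(student_logits)
```

When the student already matches the teacher the loss is zero; any divergence on the student's own trajectory produces a positive correction signal, which is the "correcting deviations in the predicted distribution" described above.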

The Hunyuan translation model not only won first place in 30 language pairs at the WMT25 international machine-translation competition, but also topped the HuggingFace trending list within a week of its initial open-source release. It has already been deployed in many of Tencent's in-house business scenarios, including Tencent Meeting, WeChat Work, QQ Browser, and the translation client.

To make it easy for developers to use, the open-source model is now available in open-source communities such as GitHub and Hugging Face, with deployment supported on multiple hardware platforms, including Arm, Qualcomm, Intel, and Muxi. 1AI attaches the open-source addresses below:

  • Hunyuan website: https://hunyuan.tencent.com/modeSquare/home/list
  • Github Link: https://github.com/Tencent-Hunyuan/HY-MT
  • HuggingFace Link: https://huggingface.co/collections/tencent/hy-mt15