{"id":29112,"date":"2025-02-20T11:04:31","date_gmt":"2025-02-20T03:04:31","guid":{"rendered":"https:\/\/www.1ai.net\/?p=29112"},"modified":"2025-02-20T11:04:31","modified_gmt":"2025-02-20T03:04:31","slug":"%e8%81%94%e5%8f%91%e7%a7%91%e6%8e%a8%e5%87%ba%e4%b8%a4%e6%ac%be%e5%a4%9a%e6%a8%a1%e6%80%81%e8%bd%bb%e9%87%8f%e7%ba%a7-ai%e6%a8%a1%e5%9e%8b%ef%bc%9a%e4%b8%bb%e6%89%93%e7%b9%81%e4%bd%93%e4%b8%ad","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/29112.html","title":{"rendered":"MediaTek Launches Two Multimodal Lightweight AI Models: Focus on Traditional Chinese Processing Capabilities, Based on Meta Llama 3.2"},"content":{"rendered":"<p>February 19th. <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%81%94%e5%8f%91%e7%a7%91\" title=\"[See articles tagged MediaTek]\" target=\"_blank\" >MediaTek<\/a>'s research arm, MediaTek Research, has released two lightweight multimodal models that support Traditional Chinese: the Llama-Breeze2-3B model, which is claimed to run on smartphones, and the Llama-Breeze2-8B model for thin-and-light laptops.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29113\" title=\"ac503959j00sryoio001ad000v500eqp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/02\/ac503959j00sryoio001ad000v500eqp.jpg\" alt=\"ac503959j00sryoio001ad000v500eqp\" width=\"1121\" height=\"530\" \/><\/p>\n<p>1AI was informed that <strong>the series is based on the Meta Llama 3.2 language model<\/strong> and also supports multimodal input and function calling, enabling it to recognize images and call external tools.<\/p>\n<p>In terms of Traditional Chinese processing capability, a comparison provided by MediaTek shows that, when asked to write a short article on Taipei night markets, Llama-Breeze2-3B accurately lists well-known local night markets such as Shilin Night Market, Raohe Street Night Market, and Luodong Night Market, whereas the Llama 3.2 3B Instruct model, which has the same number of parameters, correctly mentions only Shilin Night Market and also generates two non-existent night markets.<\/p>\n<p>In addition, MediaTek has developed an Android AI assistant app based on Llama-Breeze2-3B, and has also launched an AI text-to-speech model, BreezyVoice, which is claimed to generate realistic speech in real time from just 5 seconds of sample audio.<\/p>","protected":false},"excerpt":{"rendered":"<p>February 19 news: MediaTek's research arm, MediaTek Research, has released two lightweight multimodal models supporting Traditional Chinese: the Llama-Breeze2-3B model, which is claimed to run on smartphones, and the Llama-Breeze2-8B model for thin-and-light laptops. 1AI was informed that the series is based on the Meta Llama 3.2 language model, focuses on Traditional Chinese processing, and also supports multimodal input and function calling, enabling it to recognize images and call external tools. 
In terms of Traditional Chinese processing capability, the comparison provided by MediaTek shows that, compared with the Llama 3.2 3B Instruct model with the same number of parameter<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[167,1591],"collection":[],"class_list":["post-29112","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-1591"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/29112","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=29112"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/29112\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=29112"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=29112"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=29112"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=29112"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}