{"id":5193,"date":"2024-03-10T09:21:14","date_gmt":"2024-03-10T01:21:14","guid":{"rendered":"https:\/\/www.1ai.net\/?p=5193"},"modified":"2024-03-10T09:21:14","modified_gmt":"2024-03-10T01:21:14","slug":"70-%e4%ba%bf%e5%8f%82%e6%95%b0%ef%bc%8c%e8%81%94%e5%8f%91%e7%a7%91%e6%8e%a8%e5%87%ba-mr-breeze-7b-%e6%a8%a1%e5%9e%8b%ef%bc%9a%e6%93%85%e9%95%bf%e6%b4%9e%e5%af%9f%e6%95%b0%e6%8d%ae%e3%80%81%e6%94%af","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/5193.html","title":{"rendered":"MediaTek launches MR Breeze-7B model with 7 billion parameters: excels at data insight and supports bilingual interaction"},"content":{"rendered":"<p data-vmark=\"97ae\">MediaTek Research, a research institute of <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%81%94%e5%8f%91%e7%a7%91\" title=\"[See articles with the [MediaTek] tag]\" target=\"_blank\" >MediaTek<\/a>, recently announced that it has <strong>launched a new <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90%e5%a4%a7%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[See articles with the [Open Source Large Language Model] tag]\" target=\"_blank\" >Open Source Large Language Model<\/a> (<a href=\"https:\/\/www.1ai.net\/en\/tag\/llm\" title=\"[See articles with the [LLM] tag]\" target=\"_blank\" >LLM<\/a>) called MR Breeze-7B.<\/strong><\/p>\n<p data-vmark=\"aba1\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-5194\" title=\"76c7b58a-726d-410c-8c02-d066cccb4033\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/76c7b58a-726d-410c-8c02-d066cccb4033.jpg\" alt=\"76c7b58a-726d-410c-8c02-d066cccb4033\" width=\"1200\" height=\"648\" \/><\/p>\n<p data-vmark=\"0798\">This open source model excels at processing Traditional Chinese and English, has a total of 7 billion parameters, and is built on the acclaimed Mistral model.<\/p>\n<p data-vmark=\"8549\">Compared to its predecessor, the BLOOM-3B, the MR Breeze-7B incorporates a 
remarkable 20 times more knowledge, allowing it to navigate the intricate linguistic and cultural nuances of Traditional Chinese with greater precision.<\/p>\n<p data-vmark=\"819c\">MR Breeze-7B outperforms similar models such as Mistral and Llama in processing speed, cutting the time and memory required for complex Traditional Chinese inference in half and providing a more seamless experience for users.<\/p>\n<p data-vmark=\"3500\">Compared with other 7B English and Chinese language models, MR Breeze-7B responds more quickly, fluently, and accurately in both languages, and keenly grasps context to produce relevant, coherent responses.<\/p>\n<p data-vmark=\"4ab1\">Additionally, MR Breeze-7B excels at parsing and generating tabular content, a decisive advantage for data-driven tasks such as analysis, financial reporting, and complex scheduling, and one that makes it valuable for businesses that process large amounts of structured data.<\/p>","protected":false},"excerpt":{"rendered":"<p>MediaTek Research, a research organization of MediaTek, recently announced the launch of a new open source Large Language Model (LLM) called MR Breeze-7B. The open-source model, which specializes in Traditional Chinese and English, has a total of 7 billion parameters and is based on the acclaimed Mistral model. Compared to its predecessor, the BLOOM-3B, the MR Breeze-7B incorporates a significant 20-fold increase in knowledge, allowing it to navigate the intricate linguistic and cultural nuances of Traditional Chinese with greater precision. 
MR Breeze-7B's processing speed exceeds that of similar products such as Mistral and Llama.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[473,471,1591],"collection":[],"class_list":["post-5193","post","type-post","status-publish","format-standard","hentry","category-news","tag-llm","tag-471","tag-1591"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/5193","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=5193"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/5193\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=5193"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=5193"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=5193"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=5193"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}