{"id":34380,"date":"2025-04-30T11:33:54","date_gmt":"2025-04-30T03:33:54","guid":{"rendered":"https:\/\/www.1ai.net\/?p=34380"},"modified":"2025-04-30T11:33:54","modified_gmt":"2025-04-30T03:33:54","slug":"%e5%b0%8f%e7%b1%b3%e5%bc%80%e6%ba%90xiaomi-mimo%e5%a4%a7%e6%a8%a1%e5%9e%8b%ef%bc%9a%e4%b8%ba%e6%8e%a8%e7%90%86%e8%80%8c%e7%94%9f%ef%bc%8c%e4%bb%a5-7b-%e5%8f%82%e6%95%b0%e8%b6%85","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/34380.html","title":{"rendered":"Xiaomi open-sources \"Xiaomi MiMo\" large model: born for reasoning, surpasses OpenAI o1-mini with 7B parameters"},"content":{"rendered":"<p>April 30 news. The <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%b0%8f%e7%b1%b3\" title=\"[View articles tagged with [Xiaomi]]\" target=\"_blank\" >Xiaomi<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large models]]\" target=\"_blank\" >large model<\/a> team announced via its \"<a href=\"https:\/\/www.1ai.net\/en\/tag\/xiaomi-mimo\" title=\"_Other Organiser\" target=\"_blank\" >Xiaomi MiMo<\/a>\" official account that today Xiaomi has <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >open-sourced<\/a> Xiaomi MiMo, its first large model \"born for reasoning\", which links pre-training with post-training to comprehensively improve reasoning ability. 
According to the introduction, MiMo is the first attempt from the newly established \"Xiaomi Big Model Core Team\".<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-34381\" title=\"74a08ee0j00svihux003sd000u000izp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/04\/74a08ee0j00svihux003sd000u000izp.jpg\" alt=\"74a08ee0j00svihux003sd000u000izp\" width=\"1080\" height=\"683\" \/><\/p>\n<p>On the public evaluation sets for mathematical reasoning (AIME 24-25) and code competition (LiveCodeBench v5), MiMo, with only a <strong>7B<\/strong> parameter scale, outperforms OpenAI's closed-source reasoning model <strong>o1-mini<\/strong> and Alibaba Qwen's larger open-source reasoning model <strong>QwQ-32B-Preview<\/strong>.<\/p>\n<p>According to the team, <strong>MiMo's improved reasoning capability is driven by combined data and algorithm innovations across both the pre-training and post-training phases, including:<\/strong><\/p>\n<ul>\n<li><strong>Pre-training<\/strong>: the core goal is to expose the model to more reasoning patterns<\/li>\n<li>Data: focus on mining a rich reasoning corpus and synthesizing about 200B tokens of reasoning data<\/li>\n<li>Training: three training stages with progressively increasing difficulty, totaling 25T tokens<\/li>\n<\/ul>\n<ul>\n<li><strong>Post-training<\/strong>: the core is an efficient and stable reinforcement learning algorithm and framework<\/li>\n<li>Algorithm: a Test Difficulty Driven Reward is proposed to alleviate the reward-sparsity problem on hard algorithmic tasks, and an Easy Data Re-Sampling strategy is introduced to stabilize RL training<\/li>\n<li>Framework: a Seamless Rollout system was designed that accelerates RL training by 2.29x and validation by 
1.96x.<\/li>\n<\/ul>\n<p data-vmark=\"2901\">1AI attaches the open-source addresses:<\/p>\n<ul class=\"list-paddingleft-2\">\n<li>\n<p data-vmark=\"cf98\"><strong>Hugging Face:<\/strong><span class=\"link-text-start-with-http\">https:\/\/huggingface.co\/XiaomiMiMo<\/span><\/p>\n<\/li>\n<li>\n<p data-vmark=\"7e61\"><strong>Technical report:<\/strong><span class=\"link-text-start-with-http\">https:\/\/github.com\/XiaomiMiMo\/MiMo\/blob\/main\/MiMo-7B-Technical-Report.pdf<\/span><\/p>\n<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>April 30 news: the Xiaomi large-model team announced via its \"Xiaomi MiMo\" official account that today Xiaomi has open-sourced Xiaomi MiMo, its first large model \"born for reasoning\", linking pre-training with post-training to comprehensively improve reasoning ability. According to the introduction, MiMo is the first attempt from the newly established \"Xiaomi Big Model Core Team\". 
In the public evaluation sets for mathematical reasoning (AIME 24-25) and code competition (LiveCodeBench v5), MiMo surpassed OpenAI's closed-source reasoning model o1-mini and Alibaba Qwen's larger open-source reasoning model QwQ-32B-Preview with only a 7B parameter scale.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[6497,216,1114,219],"collection":[],"class_list":["post-34380","post","type-post","status-publish","format-standard","hentry","category-news","tag-xiaomi-mimo","tag-216","tag-1114","tag-219"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/34380","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=34380"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/34380\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=34380"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=34380"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=34380"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=34380"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}