{"id":51288,"date":"2026-03-19T12:17:02","date_gmt":"2026-03-19T04:17:02","guid":{"rendered":"https:\/\/www.1ai.net\/?p=51288"},"modified":"2026-03-19T12:17:02","modified_gmt":"2026-03-19T04:17:02","slug":"%e5%b0%8f%e7%b1%b3%e6%b7%b1%e5%a4%9c%e4%b8%8a%e7%ba%bf%e4%b8%89%e5%a4%a7%e8%87%aa%e7%a0%94-mimo-v2-%e7%b3%bb%e5%88%97%e6%a8%a1%e5%9e%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/51288.html","title":{"rendered":"Three self-research MiMo-V2 series models for rice at night"},"content":{"rendered":"<p>March 19, early this morning<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%b0%8f%e7%b1%b3\" title=\"[View articles tagged with [Xiaomi]]\" target=\"_blank\" >Millet<\/a>The Mimo V2 series was published in a centralized series<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%a8%a1%e5%9e%8b\" title=\"_Other Organiser\" target=\"_blank\" >Model<\/a>MiMo-V2-Pro, Agent Base Mimo-V2-Omni, and Mimo-V2-TTS\u3002<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-51289\" title=\"2d88ec6fj00tc4p6x00end000u0o9p\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/03\/2d88ec6fj00tc4p6x00end000u000o9p.jpg\" alt=\"2d88ec6fj00tc4p6x00end000u0o9p\" width=\"1080\" height=\"873\" \/><\/p>\n<p>MiMo-V2-Pro:<\/p>\n<p>The total amount of the parameter exceeds 1T, the activated parameter is 42B, using the Hybrid Attention structure, the mixed ratio was raised from 5:1 to 7:1 in the previous generation, and support was provided to the 1M super-long context window<\/p>\n<p>In the framework of intelligent bodies such as OpenClaw and Claude Code, complex workflow programming, long-range planning and precision tools can be done without manual intervention<\/p>\n<p>In the context of Coding capacity, the inner rice engineer assesses that it is close to Claude Opus 4.6 and has enhanced system design and mission planning capabilities<\/p>\n<p>API pricing is about 1\/5,256K in the context of Claude peer model $1\/ million token, output $3\/ 
million token; 1M enter $2\/ million token, output $6\/ million token in the context\u3002<\/p>\n<p>MiMo-V2-Omni is the first full-modular model for the integration of perception and action at the base level of millimetres, integrating text, visual and voice input, supporting the 256K context, priced as input $0.4\/ million token, output $2\/ million token\u3002<\/p>\n<p>Officially, it is claimed that its combined audio understanding is going beyond Gemini 3 Pro and image is going beyond Claude Opus 4.6; in the practical application scenario, MiMo-V2-Omni can automate browser operations in conjunction with the OpenClaw framework, including cross-platform pricedown, short video production and distribution end-to-end tasks\u3002<\/p>\n<p>The MiMo-V2-TTS speech synthesis model supports multi-particle control of emotions from the whole style to the rest of the sentence, allows for the natural processing of signals in the form of markers, words, etc., and supports a variety of dialects, such as Northeast, Sichuan and Sichuan, as well as role-playing styles and sound synthesis\u3002<\/p>\n<p>All three models are connected to the Kyinsan WFS, MiMo-V2-Pro Synchronized Microphone Client Agent product miclaw and milli browser\u3002<\/p>","protected":false},"excerpt":{"rendered":"<p>On March 19, in the early morning hours of this morning, Mimo V2 focused on three models: the flagship language base MiMo-V2-Pro, the full-mod Agent base MiMo-V2-Omni, and the voice synthesis model MiMo-V2-TTS. 
MiMo-V2-Pro: total parameters exceed 1T, with 42B activated; it uses a Hybrid Attention structure, with the hybrid ratio raised from the previous generation's 5:1 to 7:1, and supports a 1M ultra-long context window.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1114,1489],"collection":[],"class_list":["post-51288","post","type-post","status-publish","format-standard","hentry","category-news","tag-1114","tag-1489"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/51288","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=51288"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/51288\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=51288"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=51288"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=51288"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=51288"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}