MiMo-V2-Flash challenges DeepSeek-V3.2

On December 16th, Xiaomi released MiMo-V2-Flash, an open-source MoE model with 309B total parameters and 15B active parameters, designed for agentic AI with a focus on speed. According to Xiaomi's official announcement, it is a self-developed general-purpose MoE model built for extreme reasoning efficiency: inference is accelerated by a hybrid attention architecture and multi-token prediction (MTP), and it ranks in the top 2 among global open-source models on multiple agent benchmarks. Its coding capability exceeds all open-source models and is on par with the closed-source Claude 4.5 Sonnet, while its inference cost is only about 2.5% of that model's.
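The 15B-of-309B figure reflects standard MoE sparsity: a router selects a few experts per token, so only those experts' parameters participate in each forward pass. A minimal toy sketch of top-k routing (illustrative dimensions and a random router, not MiMo's actual architecture):

```python
import numpy as np

# Toy illustration of why a MoE layer with many experts only "activates"
# a small fraction of its parameters per token: a router scores all
# experts, but only the top-k chosen ones actually run.

rng = np.random.default_rng(0)

d_model, n_experts, top_k = 8, 16, 2  # hypothetical sizes, not MiMo's

# Each expert is a simple d_model -> d_model linear layer.
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
router = rng.standard_normal((d_model, n_experts))

def moe_forward(x):
    """Route a single token vector x through its top-k experts."""
    logits = x @ router
    chosen = np.argsort(logits)[-top_k:]          # indices of selected experts
    w = np.exp(logits[chosen])
    w /= w.sum()                                  # softmax over the top-k only
    out = sum(wi * (x @ experts[i]) for wi, i in zip(w, chosen))
    return out, chosen

x = rng.standard_normal(d_model)
out, chosen = moe_forward(x)

total_params = n_experts * d_model * d_model
active_params = top_k * d_model * d_model
print(f"experts used per token: {len(chosen)} of {n_experts}")
print(f"active/total expert params: {active_params / total_params:.1%}")
```

Compute scales with the active parameters, not the total, which is how a 309B model can be served at a cost closer to that of a 15B dense model.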
