{"id":36389,"date":"2025-05-29T11:56:30","date_gmt":"2025-05-29T03:56:30","guid":{"rendered":"https:\/\/www.1ai.net\/?p=36389"},"modified":"2025-05-29T11:56:30","modified_gmt":"2025-05-29T03:56:30","slug":"%e8%9a%82%e8%9a%81%e9%9b%86%e5%9b%a2%e5%ae%a3%e5%b8%83%e6%ad%a3%e5%bc%8f%e5%bc%80%e6%ba%90%e7%bb%9f%e4%b8%80%e5%a4%9a%e6%a8%a1%e6%80%81%e5%a4%a7%e6%a8%a1%e5%9e%8bming-lite-omni%ef%bc%8c%e7%99%be","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/36389.html","title":{"rendered":"Ant Group Announces Official Open Source Unified Multimodal Large Model Ming-lite-omni, Bering Releases New Multimodal Large Model"},"content":{"rendered":"<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-36390\" title=\"cf0c029fj00sx086u00gxd000u000fqm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/05\/cf0c029fj00sx086u00gxd000u000fqm.jpg\" alt=\"cf0c029fj00sx086u00gxd000u000fqm\" width=\"1080\" height=\"566\" \/><\/p>\n<p>On May 28, the ants of Paradise<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large models]]\" target=\"_blank\" >Large Model<\/a>(Ling) team today officially<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >Open Source<\/a>standardize<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%9a%e6%a8%a1%e6%80%81%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"[Sees articles with [Multimodal Large Model] labels]\" target=\"_blank\" >Multimodal large model<\/a> Ming-lite-omni.<\/p>\n<p>Ming-lite-omni is described as an all-modal model based on the MoE architecture constructed by Ling-lite, with a total parameter of 22B and an activation parameter of 3B. it supports \"cross-modal fusion and unification\" and \"comprehension and generation unification\".<\/p>\n<p>Ming-lite-omni performs as well as or better than the 10B leading multimodal macromodels in several comprehension and generative tests, with only 3B parameter activations. Officially, this is the first open source model known to be able to match GPT-4o in terms of modal support.<\/p>\n<p>In addition, the Ant-Bellion big model team will continue to optimize the effect of Ming-lite-omni on full-modal comprehension and generation tasks, and enhance the multimodal complex reasoning capability of Ming-lite-omni; it will also train a larger-size full-modal model, Ming-plus-omni, in order to further solve more highly specialized or domain-specific complex interaction problems.<\/p>\n<p>Ming-lite-omni Current model weights and inference code is open source.<\/p>\n<p>Github: https:\/\/github.com\/inclusionAI\/Ming\/tree\/main\/Ming-omni<\/p>\n<p>HuggingFace: https:\/\/huggingface.co\/inclusionAI\/Ming-Lite-Omni<\/p>\n<p>Model Scope: https:\/\/modelscope.cn\/models\/inclusionAI\/Ming-Lite-Omni<\/p>\n<p>Project Page: https:\/\/lucaria-academy.github.io\/Ming-Omni\/<\/p>","protected":false},"excerpt":{"rendered":"<p>On May 28th, the Ant-Ling Big Model (Ling) team officially open-sourced the unified multimodal big model Ming-lite-omni. According to the introduction, Ming-lite-omni is an all-modal model based on the MoE architecture constructed by Ling-lite, with a total parameter of 22B and an activation parameter of 3B, which supports \"cross-modal fusion and unification\", \"understanding and unification\", and \"generation unification\". It supports \"cross-modal fusion and unification\" and \"comprehension and generation unification\". 