{"id":36899,"date":"2025-06-06T19:19:01","date_gmt":"2025-06-06T11:19:01","guid":{"rendered":"https:\/\/www.1ai.net\/?p=36899"},"modified":"2025-06-06T19:19:01","modified_gmt":"2025-06-06T11:19:01","slug":"%e6%99%ba%e6%ba%90%e7%a0%94%e7%a9%b6%e9%99%a2%e5%8f%91%e5%b8%83%e6%82%9f%e7%95%8c%e7%b3%bb%e5%88%97%e5%a4%a7%e6%a8%a1%e5%9e%8b%ef%bc%8c%e5%90%ab%e5%85%a8%e7%90%83%e9%a6%96%e4%b8%aa","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/36899.html","title":{"rendered":"Zhiyuan Research Institute releases the \"Wujie\" series of large models, including the world's first native multimodal world model Emu3"},"content":{"rendered":"<p>June 6 news: Beijing Zhiyuan Artificial Intelligence Research Institute today released the \"<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%82%9f%e7%95%8c\" title=\"[View articles tagged Wujie]\" target=\"_blank\" >Wujie<\/a>\" series of <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged large models]\" target=\"_blank\" >large models<\/a>, including the world's first native multimodal world model, \"Emu3\", and the world's first multimodal general-purpose foundation model for brain science, \"Brain\u03bc\".<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-36900\" title=\"d2794b91j00sxfm2p00wld000m800lcp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/06\/d2794b91j00sxfm2p00wld000m800lcp.jpg\" alt=\"d2794b91j00sxfm2p00wld000m800lcp\" width=\"800\" height=\"768\" \/><\/p>\n<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%99%ba%e6%ba%90%e7%a0%94%e7%a9%b6%e9%99%a2\" title=\"[View articles tagged Zhiyuan Research Institute]\" target=\"_blank\" >Zhiyuan Research Institute<\/a> also released RoboOS 2.0, the world's first cross-embodiment brain-cerebellum collaboration framework supporting MCP; RoboBrain 2.0, a large embodied-brain model; and OpenComplex2, an all-atom microscopic life model.<\/p>\n<p>Last October, 
Zhiyuan Research Institute released Emu3, a native multimodal world model. As 1AI previously reported, Emu3 relies solely on next-token prediction, without a diffusion model or compositional approaches, to understand and generate data in three modalities: text, image, and video. Officially, it is said to <strong>unify image, text, and video<\/strong>. Emu3 supports end-to-end mapping between multimodal inputs and multimodal outputs, demonstrating the generality and strength of the autoregressive framework in the multimodal domain and providing a solid technical foundation for cross-modal interaction.<\/p>\n<p>Built on the underlying architecture of Emu3, Brain\u03bc unifies the tokenization of brain signals used in neuroscience and brain medicine, such as fMRI, EEG, and two-photon imaging. Leveraging the multimodal alignment of the pre-trained model, it maps multimodal brain signals to and from modalities such as text and images, enabling unified, general-purpose modeling across tasks, modalities, and individuals, so that multiple downstream neuroscience tasks can be accomplished with a single model.<\/p>","protected":false},"excerpt":{"rendered":"<p>June 6 news: Beijing Zhiyuan Artificial Intelligence Research Institute released the \"Wujie\" series of large models, including the world's first native multimodal world model \"Wujie\u30fbEmu3\" and the world's first multimodal general-purpose foundation model for brain science \"Brain\u03bc\". The institute also released RoboOS 2.0, the world's first cross-embodiment brain-cerebellum collaboration framework supporting MCP; RoboBrain 2.0, a large embodied-brain model; and OpenComplex2, an all-atom microscopic life model. 
Last October, Zhiyuan Research Institute released Emu3, a native multimodal world model, which, as 1AI previously reported, is based only on the next tok<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[216,6849,1739],"collection":[],"class_list":["post-36899","post","type-post","status-publish","format-standard","hentry","category-news","tag-216","tag-6849","tag-1739"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/36899","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=36899"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/36899\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=36899"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=36899"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=36899"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=36899"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}