{"id":38762,"date":"2025-07-02T15:47:38","date_gmt":"2025-07-02T07:47:38","guid":{"rendered":"https:\/\/www.1ai.net\/?p=38762"},"modified":"2025-07-02T15:47:38","modified_gmt":"2025-07-02T07:47:38","slug":"%e6%99%ba%e8%b0%b1%e8%8e%b7%e6%b5%a6%e4%b8%9c%e5%88%9b%e6%8a%95%e3%80%81%e5%bc%a0%e6%b1%9f%e9%9b%86%e5%9b%a2-10-%e4%ba%bf%e5%85%83%e6%88%98%e7%95%a5%e6%8a%95%e8%b5%84%ef%bc%8c%e5%bc%80%e6%ba%90","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/38762.html","title":{"rendered":"Wisdom Spectrum Receives 1 Billion RMB Strategic Investment from Pudong Venture Capital and Zhangjiang Group, Releases New Generation of Generalized Visual Language Model GLM-4.1V-Thinking in Open Source"},"content":{"rendered":"<p>July 2 - This morning.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%99%ba%e8%b0%b1\" title=\"[View articles tagged with [Smart Spectrum]]\" target=\"_blank\" >Zhipu<\/a>Open Platform Industry Ecological Conference held in Shanghai Pudong Zhangjiang Science Hall, open source released a new generation of universal<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%a7%86%e8%a7%89%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [visual language modeling]]\" target=\"_blank\" >visual language model<\/a> GLM-4.1V-Thinking.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-38763\" title=\"c1b6e531j00syrhmd0097d000v900lyp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/07\/c1b6e531j00syrhmd0097d000v900lyp.jpg\" alt=\"c1b6e531j00syrhmd0097d000v900lyp\" width=\"1125\" height=\"790\" \/><\/p>\n<p>At the Wisdom Spectrum Open Platform Industry Ecological Conference, Wisdom Spectrum announced that Pudong Venture Capital Group and Zhangjiang Group have invested a total of 1 billion RMB in Wisdom Spectrum.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%88%98%e7%95%a5%e6%8a%95%e8%b5%84\" title=\"[Sees articles with [strategic investment] labels]\" target=\"_blank\" >Strategic investments<\/a>and 
recently completed the first delivery of funds. At the same time, the three parties have launched a collaboration to build new artificial intelligence infrastructure.<\/p>\n<p>Zhipu today officially released and open-sourced the visual language large model <strong>GLM-4.1V-Thinking<\/strong>, a general-purpose reasoning model that supports multimodal inputs such as images, videos, and documents, and is designed for complex cognitive tasks.<\/p>\n<p>1AI learned from the official announcement that the model introduces chain-of-thought reasoning on top of the GLM-4V architecture and adopts Reinforcement Learning with Curriculum Sampling (RLCS) to systematically improve its cross-modal causal reasoning capability and stability.<\/p>\n<p>Its lightweight version, <strong>GLM-4.1V-9B-Thinking<\/strong>, keeps the parameter count at the 10B level, achieving a performance breakthrough while maintaining deployment efficiency. 
In 28 authoritative benchmarks, including MMStar, MMMU-Pro, ChartQAPro, and OSWorld, the model achieved the best results among 10B-level models on 23 of them, and on 18 of them matched or surpassed Qwen-2.5-VL, which has as many as 72B parameters, fully demonstrating the performance potential of small-scale models.<\/p>\n<p>According to the official description, the model particularly excels at the following tasks, demonstrating a high degree of versatility and robustness:<\/p>\n<ul>\n<li>General image understanding: accurately recognizes and comprehensively analyzes visual and textual information;<\/li>\n<li>Math &amp; Science: supports complex problem solving, multi-step deduction, and formula understanding;<\/li>\n<li>Video understanding: can parse temporal sequences and model event logic;<\/li>\n<li>GUI and web agent tasks (UI2Code, Agent): understands interface structure and assists automation;<\/li>\n<li>Visual grounding and entity localization (Grounding): precisely aligns language with image regions, improving the controllability of human-computer interaction.<\/li>\n<\/ul>\n<p>GLM-4.1V-9B-Thinking has now been open-sourced on Hugging Face and the ModelScope community. The release includes two models: GLM-4.1V-9B-Base, a base model intended to help more researchers explore the capability boundaries of visual language models, and GLM-4.1V-9B-Thinking, which has deep thinking and reasoning capabilities and is the model intended for everyday use and experience.<\/p>","protected":false},"excerpt":{"rendered":"<p>July 2 news: this morning, the Zhipu Open Platform Industry Ecosystem Conference was held at the Zhangjiang Science Hall in Pudong, Shanghai, where Zhipu open-sourced the new-generation visual language model GLM-4.1V-Thinking. At the conference, Zhipu announced that Pudong Venture Capital Group and Zhangjiang Group have made a combined 1 billion RMB strategic investment in Zhipu, with the first delivery recently completed. 
At the same time, the three parties have launched a collaboration to build new artificial intelligence infrastructure. Today, Zhipu officially released and open-sourced the visual language large model GLM-4.1V-Thinking, a general-purpose reasoning model that supports multimodal inputs such as images, videos, and documents, designed for complex cognitive tasks. 1AI learned from the official announcement that, on the basis of the GLM-4<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[3656,2680,4981],"collection":[],"class_list":["post-38762","post","type-post","status-publish","format-standard","hentry","category-news","tag-3656","tag-2680","tag-4981"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/38762","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=38762"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/38762\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=38762"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=38762"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=38762"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=38762"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}