{"id":50143,"date":"2026-02-12T12:14:13","date_gmt":"2026-02-12T04:14:13","guid":{"rendered":"https:\/\/www.1ai.net\/?p=50143"},"modified":"2026-02-12T12:14:23","modified_gmt":"2026-02-12T04:14:23","slug":"%e6%99%ba%e8%b0%b1%e4%b8%8a%e7%ba%bf%e5%85%a8%e6%96%b0%e6%a8%a1%e5%9e%8b-glm-5","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/50143.html","title":{"rendered":"GLM-5"},"content":{"rendered":"<p>February 12th news, just now<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%99%ba%e8%b0%b1\" title=\"[View articles tagged with [Smart Spectrum]]\" target=\"_blank\" >Zhipu<\/a>Officially online and open-source update<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%a8%a1%e5%9e%8b\" title=\"_Other Organiser\" target=\"_blank\" >Model<\/a> GM-5\u3002<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50144\" title=\"68c93506jtabvqa006d000ukm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/68c93506j00tabvqa006dd000u000kem.jpg\" alt=\"68c93506jtabvqa006d000ukm\" width=\"1080\" height=\"734\" \/><\/p>\n<p>GLM-5 is described as a product of the move towards Agentic Engineering: in the Coding and Agent capabilities, its access to open source SOTA shows that it is approaching Claude Opus 4.5 in the use of a real programming landscape, with a special focus on complex system engineering and long-range Agent missions\u3002<\/p>\n<p>GLM-5 ADOPTS A NEW BASE:<strong>PARAMETER SIZE EXPANDED FROM 355B (ACTIVATION 32B) TO 744B (ACTIVATION 40B) AND PRE-TRAINING DATA FROM 23T TO 28.5T<\/strong>; build a new \"Slime\" framework to support larger model sizes and more complex intensive learning tasks\u3002<\/p>\n<p>at the same time,<strong>GLM-5 and DeepSeek Sparse Attention<\/strong>The cost of model deployment has been significantly reduced while maintaining long text without loss\u3002<\/p>\n<p>In particular:<\/p>\n<p>The GLM-5 is the fourth largest in the world in the list of global authority Artificial 
Analysis.<\/p>\n<p>GLM-5 matches Claude Opus 4.5 in programming capability and achieves open-source model SOTA on industry-recognized mainstream benchmarks.<\/p>\n<p>On SWE-bench Verified and Terminal-Bench 2.0, GLM-5 scores 77.8 and 56.2 respectively, the highest among open-source models, outperforming Gemini 3 Pro.<\/p>\n<p>GLM-5 also achieves the best results on BrowseComp (online retrieval and information understanding), MCP-Atlas (large-scale end-to-end tool calling), and τ²-Bench (tool planning and execution for autonomous agents in complex scenarios).<\/p>\n<p>Notably, GLM-5 has completed deep inference adaptation with Chinese domestic compute platforms, including Moore Threads, Cambricon, Kunlunxin, MetaX, Enflame, and Hygon, among others. Through low-level algorithm optimization and hardware acceleration, GLM-5 runs with high throughput, low latency, and stability on domestic chip clusters.<\/p>\n<p>As of today, GLM-5 is open-sourced simultaneously on Hugging Face and the ModelScope platform, with model weights released under the MIT License. Meanwhile, GLM-5 has been added to the GLM Coding Plan Max package.<\/p>\n<p>Online experience:<\/p>\n<p>Z.ai: https:\/\/chat.z.ai<\/p>\n<p>Zhipu Qingyan App\/Web: https:\/\/chatglm.cn<\/p>\n<p>Open-source links:<\/p>\n<p>GitHub: https:\/\/github.com\/zai-org\/GLM-5<\/p>\n<p>Hugging Face: https:\/\/huggingface.co\/zai-org\/GLM-5<\/p>","protected":false},"excerpt":{"rendered":"<p>February 12 news: Zhipu has just officially launched and open-sourced its latest model, GLM-5. GLM-5 is described as a product of the move toward Agentic Engineering: it reaches open-source SOTA in Coding and Agent capabilities, approaches Claude Opus 4.5 in real-world programming scenarios, and is particularly strong at complex systems engineering and long-horizon Agent tasks. 
GLM-5 adopts an entirely new base: the parameter scale grows from 355B (32B activated) to 744B (40B activated), and pre-training data from 23T to 28.5T; an entirely new \"Slime\" framework was developed to support larger model sizes and more complex reinforcement learning tasks.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[2680,1489],"collection":[],"class_list":["post-50143","post","type-post","status-publish","format-standard","hentry","category-news","tag-2680","tag-1489"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/50143","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=50143"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/50143\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=50143"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=50143"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=50143"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=50143"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}