{"id":50228,"date":"2026-02-17T22:10:00","date_gmt":"2026-02-17T14:10:00","guid":{"rendered":"https:\/\/www.1ai.net\/?p=50228"},"modified":"2026-02-17T22:10:00","modified_gmt":"2026-02-17T14:10:00","slug":"%e9%98%bf%e9%87%8c%e5%bc%80%e6%ba%90%e6%97%97%e8%88%b0qwen3-5-%e5%8f%91%e5%b8%83%ef%bc%8c%e7%99%bb%e9%a1%b6%e5%85%a8%e7%90%83%e6%9c%80%e5%bc%ba%e5%bc%80%e6%ba%90%e6%a8%a1%e5%9e%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/50228.html","title":{"rendered":"Alibaba releases open-source flagship Qwen3.5, topping the world's strongest open-source models"},"content":{"rendered":"<p>On February 17, <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%98%bf%e9%87%8c\" title=\"[View articles tagged with [Alibaba]]\" target=\"_blank\" >Alibaba<\/a> quietly launched two new models, Qwen3.5-Plus and Qwen3.5-397B-A17B, on the Chat.qwen.ai page.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-50229\" title=\"1a9feff2j00talwna000td000o000dop\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2026\/02\/1a9feff2j00talwna000td000o000dop.jpg\" alt=\"1a9feff2j00talwna000td000o000dop\" width=\"864\" height=\"492\" \/><\/p>\n<p>1AI learned from the official page that Qwen3.5-Plus is positioned as the\u00a0<strong>latest large language model in the Qwen3.5 series<\/strong>, and Qwen3.5-397B-A17B as the\u00a0<strong>flagship open-source language model of the Qwen3.5 series<\/strong>.
Both models <strong>support text and multimodal tasks<\/strong>.<\/p>\n<p>According to Alibaba Cloud, Qwen3.5 features a completely redesigned underlying model architecture. Qwen3.5-Plus has 397 billion total parameters, of which only 17 billion are activated, less than 60% of the size of the <strong>trillion-plus-parameter Qwen3-Max model<\/strong>, with significantly improved inference efficiency and reasoning throughput increased by up to 19 times.<\/p>\n<p>Qwen3.5 scored 87.8 on the MMLU-Pro knowledge benchmark, <strong>surpassing GPT-5.2<\/strong>; 88.4 on the PhD-level GPQA test, <strong>higher than Claude 4.5<\/strong>; and 76.5 on the instruction-following benchmark IFBench, <strong>setting a new record among all models<\/strong>. On the general-agent benchmark BFCL-V4 and the search-agent benchmark BrowseComp, Qwen3.5's performance <strong>surpassed Gemini 3 Pro<\/strong>.<\/p>\n<p>Qwen3.5-397B-A17B performs well across the full range of benchmarks covering reasoning, programming, agentic abilities, and multimodal understanding, helping developers and enterprises significantly boost productivity. The model adopts an innovative hybrid architecture that combines linear-attention Gated DeltaNet with a sparse Mixture-of-Experts (MoE) design to achieve excellent inference efficiency: <strong>397 billion total parameters, with only 17 billion activated per forward pass<\/strong>, optimizing speed and cost while preserving capability.
At the same time, language and dialect support was expanded from 119 to 201, offering broader availability and better support to users worldwide.<\/p>\n<p>Qwen3.5 advances pre-training along three dimensions: capability, efficiency, and generality:<\/p>\n<ul>\n<li><strong>Capability:<\/strong> Trained on a larger corpus of vision-text data, with enhanced Chinese, multilingual, STEM, and reasoning data and stricter filtering, achieving cross-generation parity: Qwen3.5-397B-A17B matches Qwen3-Max-Base, which has more than 1T parameters.<\/li>\n<li><strong>Efficiency:<\/strong> Built on the Qwen3-Next architecture: a higher-sparsity MoE, hybrid Gated DeltaNet + Gated Attention, stability optimizations, and multi-token prediction. At 32k\/256k context lengths, Qwen3.5-397B-A17B's decoding throughput is 8.6 times \/ 19.0 times that of Qwen3-Max with comparable performance, and 3.5 times \/ 7.2 times that of Qwen3-235B-A22B, respectively.<\/li>\n<li><strong>Generality:<\/strong> Early text-vision fusion and expanded STEM \/ video data deliver native multimodality that outperforms Qwen3-VL at a similar scale. Multilingual coverage increased from 119 to 201 languages \/ dialects; an enlarged vocabulary of 250,000 tokens (vs. 150,000) improves encoding \/ decoding efficiency by roughly 10\u201360% in most languages.<\/li>\n<\/ul>\n<p>According to the presentation, Qwen3.5 lays a solid foundation for general digital intelligence, built on an efficient hybrid architecture and native multimodal reasoning.
The focus of the next phase will shift from model scale to system integration: building agents with long-term cross-session memory, interfaces to the real world, and self-improvement mechanisms, with the goal of systems that can operate autonomously and coherently over long horizons, upgrading today's task-based assistants into sustained, trusted partners.<\/p>","protected":false},"excerpt":{"rendered":"<p>On February 17, Alibaba quietly launched two new models, Qwen3.5-Plus and Qwen3.5-397B-A17B, on the Chat.qwen.ai page. 1AI learned from the official page that Qwen3.5-Plus is positioned as the latest large language model in the Qwen3.5 series, and Qwen3.5-397B-A17B as the flagship open-source language model of the Qwen3.5 series. Both models support text and multimodal tasks. According to Alibaba Cloud, Qwen3.5 features a completely redesigned underlying model architecture; Qwen3.5-Plus has 397 billion total parameters, of which only 17 billion are activated<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[862,1759],"collection":[],"class_list":["post-50228","post","type-post","status-publish","format-standard","hentry","category-news","tag-862","tag-1759"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/50228","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=50228"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/50228\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en
\/wp-json\/wp\/v2\/media?parent=50228"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=50228"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=50228"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=50228"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}