Alibaba releases open-source flagship Qwen3.5, topping the world's strongest open-source models

On February 17, Alibaba's new Qwen3.5-Plus and Qwen3.5-397B-A17B models went live on the Chat.qwen.ai page.


1AI learned from the official page that Qwen3.5-Plus is positioned as the latest series of Qwen large language models, while Qwen3.5-397B-A17B is the open-source flagship language model of the Qwen3.5 series. Both models support text and multimodal tasks.

According to Alibaba Cloud, Qwen3.5 is a complete overhaul of the underlying model architecture. Qwen3.5-Plus has 397 billion total parameters but activates only 17 billion, roughly 60% fewer total parameters than the trillion-parameter Qwen3-Max, while inference efficiency improves markedly, with inference throughput up to 19 times higher.
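
The 397B-total / 17B-active split is the hallmark of a sparse Mixture-of-Experts (MoE) design: a router sends each token to only a few experts, leaving most weights idle on any given forward pass. Below is a minimal sketch of top-k expert routing; the shapes, expert count, and gating here are illustrative assumptions, not the model's actual configuration.

```python
import numpy as np

def moe_forward(x, router_w, experts, k=2):
    """Route one token's hidden state through its top-k experts only."""
    logits = router_w @ x                       # (n_experts,) router scores
    topk = np.argsort(logits)[-k:]              # indices of the k best experts
    weights = np.exp(logits[topk] - logits[topk].max())
    weights /= weights.sum()                    # softmax over chosen experts
    # Only k of the n experts run, so only a small fraction of the
    # layer's parameters are touched for this token.
    return sum(w * experts[i](x) for w, i in zip(weights, topk))

# Toy usage: 64 experts, 2 active per token -> ~3% of expert weights used.
rng = np.random.default_rng(0)
d, n = 16, 64
mats = [rng.standard_normal((d, d)) / np.sqrt(d) for _ in range(n)]
experts = [(lambda W: (lambda x: np.tanh(W @ x)))(W) for W in mats]
y = moe_forward(rng.standard_normal(d), rng.standard_normal((n, d)), experts)
```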

Qwen3.5 scored 87.8 on the MMLU-Pro knowledge benchmark, surpassing GPT-5.2; 88.4 on the PhD-level GPQA test, higher than Claude 4.5; and 76.5 on the instruction-following benchmark IFBench, setting a record across all models. On the general-agent benchmark BFCL-V4 and the search-agent benchmark BrowseComp, Qwen3.5 outperforms Gemini 3 Pro.

Qwen3.5-397B-A17B performs strongly across the full range of benchmarks covering reasoning, coding, agentic abilities, and multimodal understanding, helping developers and enterprises substantially raise productivity. The model uses an innovative hybrid architecture that combines linear-attention Gated DeltaNet layers with a sparse Mixture-of-Experts (MoE): of its 397 billion total parameters, only 17 billion are activated on each forward pass, optimizing speed and cost while preserving capability. Language and dialect support has also been expanded from 119 to 201, broadening availability for users worldwide.
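
The long-context throughput gains come largely from replacing most quadratic self-attention with linear-attention layers. As intuition, a gated delta-rule layer of the kind Gated DeltaNet popularized keeps a fixed-size recurrent state per layer instead of a KV cache that grows with context, so each decode step costs the same at 256k context as at 32k. A simplified single-head NumPy sketch of one decode step follows; the actual model's gating, normalization, and multi-head details differ.

```python
import numpy as np

def gated_delta_step(S, q, k, v, alpha, beta):
    """One decode step of a gated delta-rule linear-attention layer.

    S     : (d_k, d_v) recurrent state (the only per-layer cache needed)
    q, k  : (d_k,) query / key for the current token
    v     : (d_v,) value for the current token
    alpha : scalar in (0, 1), data-dependent decay (forget gate)
    beta  : scalar in (0, 1), write strength of the delta update
    """
    # Erase what the state currently associates with key k, decay it,
    # then write the new key->value association (the "delta rule").
    S = alpha * (S - beta * np.outer(k, k @ S)) + beta * np.outer(k, v)
    # Read-out is constant-time per token, independent of context length.
    o = S.T @ q
    return S, o

# Toy usage: the state stays the same size no matter how long the sequence.
d_k, d_v = 8, 8
S = np.zeros((d_k, d_v))
for _ in range(1000):                          # 1000 "tokens"
    q, k, v = (np.random.randn(d) for d in (d_k, d_k, d_v))
    S, o = gated_delta_step(S, q, k, v, alpha=0.95, beta=0.5)
```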

Qwen3.5's pre-training advances along three dimensions: capability, efficiency, and generality:

  • Capability: trained on a larger corpus of vision-text tokens, with strengthened Chinese, multilingual, STEM, and reasoning data under stricter filtering, achieving cross-generation parity: Qwen3.5-397B-A17B matches Qwen3-Max-Base, which has more than 1T parameters.
  • Efficiency: built on the Qwen3-Next architecture: a sparser MoE, hybrid Gated DeltaNet + Gated Attention, stability optimizations, and multi-token prediction. Qwen3.5-397B-A17B's decode throughput is 8.6× / 19.0× that of Qwen3-Max at 32k / 256k context lengths with comparable performance, and 3.5× / 7.2× that of Qwen3-235B-A22B, respectively.
  • Generality: early text-vision fusion plus expanded STEM/video data yield native multimodality that outperforms Qwen3-VL at a similar scale. Multilingual coverage grows from 119 to 201 languages/dialects, and a 250K-token vocabulary (vs. 150K) improves encoding/decoding efficiency by roughly 10–60% in most languages (see the toy tokenizer sketch after this list).
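
The vocabulary point in the last bullet is about compression: a larger token vocabulary covers common strings with fewer, longer pieces, so the same text needs fewer prefill tokens and fewer decode steps. A toy greedy longest-match illustration, with an invented vocabulary standing in for real BPE-style merges:

```python
def greedy_tokenize(text, vocab, max_len=12):
    """Greedy longest-match tokenization against a fixed vocabulary."""
    tokens, i = [], 0
    while i < len(text):
        for L in range(min(len(text) - i, max_len), 0, -1):
            piece = text[i:i + L]
            if piece in vocab or L == 1:       # single chars always succeed
                tokens.append(piece)
                i += L
                break
    return tokens

small = {"to", "ken", "iz", "er"}              # toy stand-in for a 150K vocab
large = small | {"token", "tokenizer"}         # toy stand-in for a 250K vocab
text = "tokenizer" * 3
print(len(greedy_tokenize(text, small)))       # 12 pieces
print(len(greedy_tokenize(text, large)))       # 3 pieces -> fewer decode steps
```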

According to the presentation, Qwen3.5's efficient hybrid architecture and native multimodal reasoning lay a solid foundation for general digital intelligence. The next phase shifts the focus from model scale to system integration: building agents with cross-session long-term memory, interfaces to the physical world, and self-improvement mechanisms, with the goal of a system that can operate autonomously and reliably over long horizons, upgrading today's task-based assistant into a sustained, trustworthy partner.
