News of April 23rd: Alibaba's Tongyi Qianwen team announced yesterday that its open-source model family has a new member, Qwen3.6-27B. It is a dense multimodal model with 27 billion parameters, and also the model size most requested by the community.

This follows the earlier releases of Qwen3.6-Plus and Qwen3.6-35B-A3B. The open-source 27B version retains the advantages of a dense architecture while delivering comprehensive upgrades in agentic coding and multimodal reasoning.
According to the official announcement, Qwen3.6-27B, which supports both thinking and non-thinking modes, achieves flagship-level performance in agentic coding, surpassing the former open-source flagship Qwen3.5-397B-A17B, a mixture-of-experts (MoE) model with 397 billion total parameters and 17 billion activated parameters. As a dense model, Qwen3.6-27B can be deployed without MoE routing, making it an ideal option for developers who want top-tier coding capability at a practical, broadly deployable scale.
In natural-language and coding benchmark tests, Qwen3.6-27B, with only 27 billion parameters, comprehensively outperformed the roughly 15-times-larger Qwen3.5-397B-A17B on all major coding benchmarks.
Specifically, it scores 77.2 on SWE-bench Verified (vs. 76.2 previously), 53.5 on SWE-bench Pro (vs. 50.9), 59.3 on Terminal-Bench 2.0 (vs. 52.5), and 48.2 on SkillsBench (vs. 30.0).
On reasoning tasks, Qwen3.6-27B achieved 87.8 on GPQA Diamond, comparable to models several times its size.
On vision-language tasks, the model is natively multimodal: it can process mixed image, video, and text input, and supports tasks such as visual reasoning, document understanding, and visual question answering, in line with Qwen3.6-35B-A3B.
1AI notes that the open-source weights for Qwen3.6-27B are available on the Hugging Face and ModelScope platforms, and developers can download them for local deployment. Users can also chat with the model directly on Qwen Chat (chat.qwen.ai).
In addition, the ARI platform will soon support calling the model through its API, with a "preserve_thinking" option that keeps all prior thinking content in the message history; officials recommend enabling it for agentic tasks.
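To illustrate, here is a minimal sketch of what such an API request could look like, assuming an OpenAI-compatible chat-completions payload. The endpoint URL, model identifier, and the exact placement of the "preserve_thinking" flag are assumptions for illustration; only the flag's name comes from the announcement, so check the platform's API documentation before use.

```python
import json

# Assumed endpoint and model id -- placeholders, not confirmed API details.
API_URL = "https://example.aliyuncs.com/v1/chat/completions"
MODEL_ID = "qwen3.6-27b"

def build_request(messages, preserve_thinking=True):
    """Build a chat payload with the article's 'preserve_thinking' option
    (flag name from the announcement; its position in the payload is an
    assumption) so earlier reasoning traces stay in the conversation
    history across turns of an agentic task."""
    return {
        "model": MODEL_ID,
        "messages": messages,
        "preserve_thinking": preserve_thinking,
    }

payload = build_request([{"role": "user", "content": "Summarize this repo."}])
print(json.dumps(payload, indent=2))
```

Keeping thinking content in the history lets later turns of a multi-step agent build on the model's earlier reasoning instead of regenerating it, which is why the announcement recommends it for agentic workloads.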
The model also integrates seamlessly with popular third-party coding assistants, including OpenClaw, Claude Code, and Qwen Code, streamlining development workflows and enabling an efficient, context-aware coding experience.