Yesterday, the Alibaba Tongyi team announced the official release of an upgraded version of the Qwen3-235B-A22B thinking model: Qwen3-235B-A22B-Thinking-2507.

According to the announcement, the newly open-sourced Qwen3-235B-A22B-Thinking-2507 achieves a major leap in reasoning performance and generalization, rivaling top closed-source models such as Google's Gemini-2.5 Pro and OpenAI's o4-mini, and sets new state-of-the-art (SOTA) results among open-source models:
In core capabilities such as coding (LiveCodeBench) and math (AIME25), the Qwen3 thinking model achieves another breakthrough in reasoning performance;
The model also makes significant progress in general capabilities such as knowledge (SuperGPQA), creative writing (WritingBench), human preference alignment (Arena-Hard v2), and multilingual ability (MultiIF);
It is also worth noting that the new model supports 256K-token long-context understanding.
Qwen3-235B-A22B-Thinking-2507 is now open-sourced on the ModelScope community and Hugging Face under the permissive Apache 2.0 license. The new model is also available on Qwen Chat.
ModelScope: https://www.modelscope.cn/models/Qwen/Qwen3-235B-A22B-Thinking-2507
Hugging Face: https://huggingface.co/Qwen/Qwen3-235B-A22B-Thinking-2507