{"id":40150,"date":"2025-07-26T14:48:12","date_gmt":"2025-07-26T06:48:12","guid":{"rendered":"https:\/\/www.1ai.net\/?p=40150"},"modified":"2025-07-26T14:48:12","modified_gmt":"2025-07-26T06:48:12","slug":"%e9%98%bf%e9%87%8c%e9%80%9a%e4%b9%89%e6%96%b0%e6%8e%a8%e7%90%86%e6%a8%a1%e5%9e%8b%e5%8f%91%e5%b8%83%ef%bc%8c%e6%80%a7%e8%83%bd%e6%af%94%e8%82%a9%e9%97%ad%e6%ba%90%e6%a8%a1%e5%9e%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/40150.html","title":{"rendered":"Ali Tongyi releases new inference model with performance comparable to closed-source models"},"content":{"rendered":"<p>Yesterday, the <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%98%bf%e9%87%8c%e9%80%9a%e4%b9%89\" title=\"View articles tagged with [Ali Tongyi]\" target=\"_blank\" >Ali Tongyi<\/a> team announced the official release of the upgraded version of its Qwen3-235B-A22B thinking model: Qwen3-235B-A22B-Thinking-2507.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-40151\" title=\"768ac14cj00szzut90018d000u000gwm\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/07\/768ac14cj00szzut90018d000u000gwm.jpg\" alt=\"768ac14cj00szzut90018d000u000gwm\" width=\"1080\" height=\"608\" \/><\/p>\n<p>According to the announcement, the newly open-sourced Qwen3-235B-A22B-Thinking-2507 delivers a major leap in reasoning performance and general capability. It is comparable to top closed-source models such as Google Gemini-2.5 Pro and OpenAI o4-mini, and sets a new state-of-the-art (SOTA) among open-source models:<\/p>\n<p>In core capabilities such as programming (LiveCodeBench) and math (AIME25), the Qwen3 <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%8e%a8%e7%90%86%e6%a8%a1%e5%9e%8b\" title=\"View articles tagged with [inference model]\" target=\"_blank\" >inference model<\/a> achieves another breakthrough in reasoning performance;<\/p>\n<p>The Qwen3 inference model has also made significant progress in general capabilities such as knowledge (SuperGPQA), creative writing (WritingBench), human preference alignment (Arena-Hard v2), and multilingual ability (MultiIF);<\/p>\n<p>Notably, the new model supports 256K long-context comprehension.<\/p>\n<p>Qwen3-235B-A22B-Thinking-2507 is now open-sourced on the ModelScope community and Hugging Face under the permissive Apache 2.0 license. The new model is also available on Qwen Chat.<\/p>\n<p><strong>ModelScope:<\/strong>https:\/\/www.modelscope.cn\/models\/Qwen\/Qwen3-235B-A22B-Thinking-2507<\/p>\n<p><strong>Hugging Face:<\/strong>https:\/\/huggingface.co\/Qwen\/Qwen3-235B-A22B-Thinking-2507<\/p>","protected":false},"excerpt":{"rendered":"<p>Yesterday, the Ali Tongyi team announced the official release of the upgraded version of its Qwen3-235B-A22B thinking model: Qwen3-235B-A22B-Thinking-2507. The newly open-sourced Qwen3-235B-A22B-Thinking-2507 is described as delivering a major leap in reasoning performance and general capability, comparable to top closed-source models such as Google Gemini-2.5 Pro and OpenAI o4-mini, and setting a new state-of-the-art (SOTA) among open-source models. In core capabilities such as programming (LiveCodeBench) and math (AIME25)<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[5023,3390],"collection":[],"class_list":["post-40150","post","type-post","status-publish","format-standard","hentry","category-news","tag-5023","tag-3390"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/40150","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=40150"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/40150\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=40150"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=40150"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=40150"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=40150"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}