{"id":40525,"date":"2025-07-31T11:43:26","date_gmt":"2025-07-31T03:43:26","guid":{"rendered":"https:\/\/www.1ai.net\/?p=40525"},"modified":"2025-07-31T11:43:26","modified_gmt":"2025-07-31T03:43:26","slug":"%e9%98%bf%e9%87%8c%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae%e6%8e%a8%e5%87%ba%e5%85%a8%e6%96%b0%e6%8e%a8%e7%90%86%e6%a8%a1%e5%9e%8b-qwen3-30b-a3b-thinking-2507%ef%bc%8c%e5%a4%9a%e9%a1%b9%e8%83%bd%e5%8a%9b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/40525.html","title":{"rendered":"Ali Tongyi Thousand Questions Launches New Reasoning Model Qwen3-30B-A3B-Thinking-2507 with Significant Improvements in Several Capabilities"},"content":{"rendered":"<p>July 31, 2012 - Ali<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae\" title=\"[View articles tagged with [Tongyi Thousand Questions]]\" target=\"_blank\" >Thousand Questions on Tongyi<\/a>Today announced the launch of the new<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e6%8e%a8%e7%90%86%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [inference model]]\" target=\"_blank\" >inference model<\/a> Qwen3-30B-A3B-Thinking-2507. 
Compared to the Qwen3-30B-A3B model, which was open-sourced on April 29, the new model offers significant improvements in reasoning, generalization, and context length:<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-40526\" title=\"f4f18c7bj00t08vna0012d000v900gcp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/07\/f4f18c7bj00t08vna0012d000v900gcp.jpg\" alt=\"f4f18c7bj00t08vna0012d000v900gcp\" width=\"1125\" height=\"588\" \/><\/p>\n<ul>\n<li>The new model scored 85.0 on AIME25, which focuses on math proficiency, and 66.0 on LiveCodeBench v6, a test of coding proficiency. <strong>Both core reasoning results exceed Gemini 2.5 Flash (thinking) and Qwen3-235B-A22B (thinking)<\/strong>; the new model's knowledge level (GPQA, MMLU-Pro) has also improved significantly over the previous version.<\/li>\n<li>In general-capability benchmarks covering WritingBench, agent capability (BFCL-v3), multi-turn dialogue, and multilingual instruction following (MultiIF), Qwen3-30B-A3B-Thinking-2507 outperforms Gemini 2.5 Flash (thinking) and Qwen3-235B-A22B (thinking).<\/li>\n<li>Longer context understanding, with native support for 256K tokens, <strong>scalable to 1M tokens<\/strong>.<\/li>\n<\/ul>\n<p>In addition, <strong>the thinking length of the new model has also been increased<\/strong>; it is officially recommended to set a longer thinking budget for highly complex reasoning tasks to realize its full potential.<\/p>\n<p>Officially, Qwen3-30B-A3B-Thinking-2507 has been open-sourced on the ModelScope community and HuggingFace, and its lightweight size makes it easy to deploy locally on consumer-grade hardware; the new model is also available on Qwen Chat.<\/p>","protected":false},"excerpt":{"rendered":"<p>July 31, 2025 - Alibaba's Tongyi Qianwen today announced the launch of a new reasoning model, Qwen3-30B-A3B-Thinking-2507. 
Compared to the Qwen3-30B-A3B model that was open-sourced on April 29, the new model has significantly improved in reasoning ability, generalization ability, and context length: the new model scored 85.0 in AIME25, which focuses on mathematical ability, and 66.0 in LiveCodeBench v6, a test of coding ability, surpassing Gemini 2.5 Flash (thinking) and Qwen3-235B-A22B (thinking) in both core reasoning skills; the new model's knowledge level (G<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[5023,331],"collection":[],"class_list":["post-40525","post","type-post","status-publish","format-standard","hentry","category-news","tag-5023","tag-331"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/40525","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=40525"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/40525\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=40525"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=40525"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=40525"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=40525"}],"curies":[{"name":"wp","href
":"https:\/\/api.w.org\/{rel}","templated":true}]}}