{"id":40736,"date":"2025-08-04T11:34:37","date_gmt":"2025-08-04T03:34:37","guid":{"rendered":"https:\/\/www.1ai.net\/?p=40736"},"modified":"2025-08-04T11:34:37","modified_gmt":"2025-08-04T03:34:37","slug":"%e5%bc%80%e6%ba%90%e5%a4%a7%e6%a8%a1%e5%9e%8b%e5%be%97%e5%88%86%e6%96%b0%e7%ba%aa%e5%bd%95%ef%bc%8c%e9%98%bf%e9%87%8c%e9%80%9a%e4%b9%89-qwen3-%e6%a8%a1%e5%9e%8b%e6%8b%bf%e4%b8%8b%e5%85%a8%e7%90%83","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/40736.html","title":{"rendered":"Open-Source Large Model Sets New Scoring Record: Ali Tongyi Qwen3 Takes Third Place Worldwide"},"content":{"rendered":"<p>According to <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%98%bf%e9%87%8c%e9%80%9a%e4%b9%89\" title=\"[See articles with the [Ali Tongyi] tag]\" target=\"_blank\" >Ali Tongyi<\/a>, the internationally recognized <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e6%a8%a1%e5%9e%8b%e8%af%84%e6%b5%8b\" title=\"[See articles with the [large model evaluation] tag]\" target=\"_blank\" >large model evaluation<\/a> platform <a href=\"https:\/\/www.1ai.net\/en\/tag\/chatbot-arena\" title=\"[See articles with the [Chatbot Arena] tag]\" target=\"_blank\" >Chatbot Arena<\/a> recently released its latest leaderboard: Qwen3-235B-A22B-Instruct-2507 scored 1,433 points, surpassing the top closed-source models Grok 4, Claude 4, and GPT-4.1 and placing Qwen3 third worldwide on the overall list.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-40737\" title=\"61f6ceb6j00t0g9vo0046d000u00140m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/08\/61f6ceb6j00t0g9vo0046d000u00140m.jpg\" alt=\"61f6ceb6j00t0g9vo0046d000u00140m\" width=\"1080\" height=\"1440\" \/><\/p>\n<p>Chatbot Arena, which uses a blind-test evaluation mechanism, is regarded as one of the most influential leaderboards in the field of AI large models.<\/p>\n<p>Qwen3's score of 1,433 is the highest ever recorded by a global open-source large model or a Chinese large model. 
At the same time, Qwen3 also ranked \"No. 1 in the world\" in five key competency categories: math, coding, hard prompts, longer query, and instruction following.<\/p>\n<p>Beyond the Qwen3 Instruct model, several other models in the Qwen3 family also achieved excellent results:<\/p>\n<p>The reasoning model Qwen3-235B-A22B-Thinking-2507 also broke into the top ten of the list, tying for first place worldwide in math ability;<\/p>\n<p>The coding model Qwen3-Coder tied for first place with Gemini 2.5 Pro, DeepSeek-R1, and Claude 4 on WebDev Arena, Chatbot Arena's sub-leaderboard dedicated to evaluating programming capabilities.<\/p>","protected":false},"excerpt":{"rendered":"<p>According to Ali Tongyi, Chatbot Arena, an internationally renowned large model evaluation platform, recently announced its latest leaderboard: Qwen3-235B-A22B-Instruct-2507 scored 1,433 points, surpassing the top closed-source models Grok 4, Claude 4, and GPT-4.1, and Qwen3 ranked third worldwide on the overall list. Chatbot Arena adopts a blind-test evaluation mechanism and is one of the most influential leaderboards in the field of AI models. Qwen3's score of 1,433 is the highest ever recorded by a global open-source large model or a Chinese large model. 
At the same time, Qwen3 also took \"No. 1 in the world\" in five key competency sub-categories, including math, coding, hard prompts, longer query, and instruction following.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[7334,7333,3390],"collection":[],"class_list":["post-40736","post","type-post","status-publish","format-standard","hentry","category-news","tag-chatbot-arena","tag-7333","tag-3390"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/40736","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=40736"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/40736\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=40736"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=40736"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=40736"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=40736"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}