{"id":42550,"date":"2025-09-07T13:27:57","date_gmt":"2025-09-07T05:27:57","guid":{"rendered":"https:\/\/www.1ai.net\/?p=42550"},"modified":"2025-09-07T13:27:57","modified_gmt":"2025-09-07T05:27:57","slug":"%e5%8f%82%e6%95%b0%e9%87%8f-1t%ef%bc%8c%e9%98%bf%e9%87%8c%e5%ae%98%e6%96%b9%e4%bb%8b%e7%bb%8d%e9%80%9a%e4%b9%89%e6%9c%80%e5%bc%ba%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8bqwen3-max-preview","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/42550.html","title":{"rendered":"1T parameters: Alibaba officially introduces Qwen3-Max-Preview, the \"strongest Tongyi language model\"."},"content":{"rendered":"<p>September 7 news: <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%98%bf%e9%87%8c\" title=\"[View articles tagged with [Ali]]\" target=\"_blank\" >Alibaba<\/a> quietly launched the Qwen3-Max-Preview model on the <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae\" title=\"[View articles tagged with [Tongyi Thousand Questions]]\" target=\"_blank\" >Tongyi Qianwen<\/a> official website and on OpenRouter, calling it <strong>the most powerful <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [language model]]\" target=\"_blank\" >language model<\/a> in the Tongyi Qianwen series<\/strong>. Afterwards, the Tongyi team introduced the model's features via its official Weibo account. 
<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-42551\" title=\"1469d351j00t27dt9003hd000v900hkp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/09\/1469d351j00t27dt9003hd000v900hkp.jpg\" alt=\"1469d351j00t27dt9003hd000v900hkp\" width=\"1125\" height=\"632\" \/><\/p>\n<p>Compared with the 2.5 series, the newly released Qwen3-Max-Preview (Instruct) achieves significant improvements in dimensions such as <strong>Chinese and English comprehension, complex instruction following, and tool invocation<\/strong>, while <strong>markedly reducing knowledge hallucinations<\/strong>, making the model smarter and more reliable.<\/p>\n<p>According to the official description, its <strong>parameter count reaches 1T<\/strong>. It also \"leads the pack\" in the Arena-Hard v2 benchmark, which measures performance on complex challenges, and scores 80.6 on the AIME25 benchmark, which tests reasoning ability, demonstrating <strong>strong logical thinking<\/strong>. 
The model will \"bring a whole new experience\" in handling complex workflows, conducting high-quality open conversations, and more.<\/p>\n<p>1AI attaches the access links:<\/p>\n<ul>\n<li>Qwen Chat: https:\/\/chat.qwen.ai\/<\/li>\n<li>Alibaba Cloud Bailian API service: https:\/\/bailian.console.aliyun.com\/ (search Qwen3-Max-Preview)<\/li>\n<\/ul>\n<p>The official announcement from Alibaba Cloud Bailian shows that this release uses a <strong>tiered pricing model based on input length<\/strong>.<\/p>","protected":false},"excerpt":{"rendered":"<p>September 7 news: Alibaba quietly launched the Qwen3-Max-Preview model on the Tongyi Qianwen official website and on OpenRouter, calling it the most powerful language model in the Tongyi Qianwen series. Afterwards, the Tongyi team introduced the model's features via its official Weibo account. Compared with the 2.5 series, Qwen3-Max-Preview (Instruct) achieves significant improvements in dimensions such as Chinese and English comprehension, complex instruction following, and tool invocation, while markedly reducing knowledge hallucinations, making the model smarter and more reliable. 
According to the official introduction, its parameter count reaches 1T, and it \"leads the list\" in the Arena-Hard v2 benchmark, which measures performance on complex challenges, and in the test of<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1144,331,1759],"collection":[],"class_list":["post-42550","post","type-post","status-publish","format-standard","hentry","category-news","tag-1144","tag-331","tag-1759"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/42550","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=42550"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/42550\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=42550"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=42550"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=42550"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=42550"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}