{"id":20245,"date":"2024-09-20T09:46:35","date_gmt":"2024-09-20T01:46:35","guid":{"rendered":"https:\/\/www.1ai.net\/?p=20245"},"modified":"2024-09-20T09:46:35","modified_gmt":"2024-09-20T01:46:35","slug":"%e9%98%bf%e9%87%8c%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae%e5%bc%80%e6%ba%90-qwen2-5-%e5%a4%a7%e6%a8%a1%e5%9e%8b%ef%bc%8c%e5%8f%b7%e7%a7%b0%e6%80%a7%e8%83%bd%e8%b6%85%e8%b6%8a-llama","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/20245.html","title":{"rendered":"Alibaba Tongyi Qianwen open-sources the Qwen2.5 large models, claiming performance beyond Llama"},"content":{"rendered":"<p>At the 2024 Apsara Conference, AliCloud CTO Zhou Jingren released the new generation of <strong><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae\" title=\"[View articles tagged with [Tongyi Thousand Questions]]\" target=\"_blank\" >Tongyi Qianwen<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >open source<\/a> models, Qwen2.5<\/strong>. The flagship model, Qwen2.5-72B, is claimed to outperform Llama-405B.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-20247\" title=\"ba7e1b85j00sk38w4003yd001hc00u0m\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/09\/ba7e1b85j00sk38w4003yd001hc00u0m.jpg\" alt=\"ba7e1b85j00sk38w4003yd001hc00u0m\" width=\"1920\" height=\"1080\" \/><\/p>\n<p>Qwen2.5 covers large language models, multimodal models, math models, and code models in multiple sizes, each available in a base version, an instruction-tuned version, and quantized versions, for a total of more than 100 models.<\/p>\n<ul>\n<li><strong>Qwen2.5 language models:<\/strong> 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B.<\/li>\n<li><strong>Qwen2.5-Coder programming models:<\/strong> 1.5B, 7B, and soon 32B.<\/li>\n<li><strong>Qwen2.5-Math math models:<\/strong> 1.5B, 7B, and 72B.<\/li>\n<\/ul>\n<p><img loading=\"lazy\" 
decoding=\"async\" class=\"alignnone size-full wp-image-20246\" title=\"84037c52j00sk38w40017d0014000ham\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/09\/84037c52j00sk38w40017d0014000ham.jpg\" alt=\"84037c52j00sk38w40017d0014000ham\" width=\"1440\" height=\"622\" \/><\/p>\n<p>With the exception of the 3B and 72B versions, all of Tongyi Qianwen's open source models use the\u00a0<strong>Apache 2.0 license<\/strong>; users can find the license file in the corresponding Hugging Face repository.<\/p>\n<p>In addition, Tongyi Qianwen also provides APIs for its flagship language models, Qwen-Plus and Qwen-Turbo, through Model Studio, and has open-sourced Qwen2-VL-72B, which offers improved performance over last month's release.<\/p>\n<p>AliCloud officially revealed that, as of mid-September 2024, cumulative downloads of the Tongyi Qianwen open source models had <strong>exceeded 40 million<\/strong>, making them a world-class family of models second only to Llama.<\/p>\n<p>Also at the 2024 Apsara Conference, AliCloud announced across-the-board price cuts for Tongyi Qianwen's main models, with reductions of up to 85%.<\/p>","protected":false},"excerpt":{"rendered":"<p>At the 2024 Apsara Conference, AliCloud CTO Zhou Jingren released Qwen2.5, a new generation of open source models, of which the flagship model, Qwen2.5-72B, is claimed to outperform Llama-405B. Qwen2.5 covers multiple sizes of large language models, multimodal models, math models, and code models, and each size has a base version, an instruction-tuned version, and quantized versions, totaling more than 100 models. 
Qwen2.5 Language Models: 0.5B, 1.5B, 3B, 7B, 14B, 32B, and 72B; Qwen2.5-Coder Programming Models: 1.5B, 7B, and soon 32B; Qwen2.5-Math Mathematical<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[216,219,331],"collection":[],"class_list":["post-20245","post","type-post","status-publish","format-standard","hentry","category-news","tag-216","tag-219","tag-331"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/20245","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=20245"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/20245\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=20245"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=20245"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=20245"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=20245"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}