{"id":22968,"date":"2024-11-12T19:20:16","date_gmt":"2024-11-12T11:20:16","guid":{"rendered":"https:\/\/www.1ai.net\/?p=22968"},"modified":"2024-11-12T19:20:16","modified_gmt":"2024-11-12T11:20:16","slug":"%e9%98%bf%e9%87%8c%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae%e5%bc%80%e6%ba%90-qwen2-5-coder-%e5%85%a8%e7%b3%bb%e5%88%97%e6%a8%a1%e5%9e%8b%ef%bc%8c%e5%8f%b7%e7%a7%b0%e4%bb%a3%e7%a0%81%e8%83%bd%e5%8a%9b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/22968.html","title":{"rendered":"Alibaba Tongyi Qianwen open-sources the full Qwen2.5-Coder model series, claiming code ability on par with GPT-4o"},"content":{"rendered":"<p>November 12 news: <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%98%bf%e9%87%8c\" title=\"[View articles tagged with [Ali]]\" target=\"_blank\" >Alibaba<\/a> <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae\" title=\"[View articles tagged with [Tongyi Thousand Questions]]\" target=\"_blank\" >Tongyi Qianwen<\/a> has <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >open-sourced<\/a> the full Qwen2.5-Coder model series, of which Qwen2.5-Coder-32B-Instruct is the current open-source SOTA; the official claim is that its code ability ties GPT-4o.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-22969\" title=\"b17c8cdej00smu4rp002bd000v900rsp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/11\/b17c8cdej00smu4rp002bd000v900rsp.jpg\" alt=\"b17c8cdej00smu4rp002bd000v900rsp\" width=\"1125\" height=\"1000\" \/><\/p>\n<p>Qwen2.5-Coder-32B-Instruct, the flagship model of this release, achieves the best performance among open-source models on several popular code-generation benchmarks (e.g., EvalPlus, LiveCodeBench, BigCodeBench) and is officially claimed to be competitive with GPT-4o.<\/p>\n<p>Tongyi Qianwen previously open-sourced the 1.5B and 7B sizes; this release adds four more sizes (0.5B, 3B, 14B, and 32B), covering six mainstream model sizes in total.<\/p>\n<p>The Qwen2.5-Coder 0.5B \/ 1.5B \/ 7B \/ 14B \/ 32B models are released under the\u00a0<strong>Apache 2.0<\/strong>\u00a0license, while the 3B model is licensed for research use only.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-22970\" title=\"8cc5598ej00smu4sk0039d000v900iyp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/11\/8cc5598ej00smu4sk0039d000v900iyp.jpg\" alt=\"8cc5598ej00smu4sk0039d000v900iyp\" width=\"1125\" height=\"682\" \/><\/p>\n<p>The relevant links are as follows:<\/p>\n<ul class=\"medium-size list-paddingleft-2\">\n<li>\n<p data-vmark=\"4567\"><strong>GitHub<\/strong>: <a href=\"https:\/\/github.com\/QwenLM\/Qwen2.5-Coder\" target=\"_blank\" rel=\"noopener\"><span class=\"link-text-start-with-http\">https:\/\/github.com\/QwenLM\/Qwen2.5-Coder<\/span><\/a><\/p>\n<\/li>\n<li>\n<p data-vmark=\"89f7\"><strong>Hugging Face<\/strong>: <a href=\"https:\/\/huggingface.co\/collections\/Qwen\/qwen25-coder-66eaa22e6f99801bf65b0c2f\" target=\"_blank\" rel=\"noopener\"><span class=\"link-text-start-with-http\">https:\/\/huggingface.co\/collections\/Qwen\/qwen25-coder-66eaa22e6f99801bf65b0c2f<\/span><\/a><\/p>\n<\/li>\n<li>\n<p data-vmark=\"0a85\"><strong>ModelScope<\/strong>: <a href=\"https:\/\/modelscope.cn\/organization\/qwen\" target=\"_blank\" rel=\"noopener\"><span class=\"link-text-start-with-http\">https:\/\/modelscope.cn\/organization\/qwen<\/span><\/a><\/p>\n<\/li>\n<li>\n<p data-vmark=\"0068\"><strong>Demo<\/strong>: <a href=\"https:\/\/huggingface.co\/spaces\/Qwen\/Qwen2.5-Coder-demo\" target=\"_blank\" rel=\"noopener\"><span class=\"link-text-start-with-http\">https:\/\/huggingface.co\/spaces\/Qwen\/Qwen2.5-Coder-demo<\/span><\/a><\/p>\n<\/li>\n<\/ul>","protected":false},"excerpt":{"rendered":"<p>November 12 news: Alibaba Tongyi Qianwen has open-sourced the full Qwen2.5-Coder model series, of which Qwen2.5-Coder-32B-Instruct is the current open-source SOTA, with code ability officially claimed to tie GPT-4o. Qwen2.5-Coder-32B-Instruct, the flagship model of this release, achieves the best performance among open-source models on several popular code-generation benchmarks (e.g., EvalPlus, LiveCodeBench, BigCodeBench) and is officially claimed to be competitive with GPT-4o. Tongyi Qianwen previously open-sourced the 1.5B and 7B sizes<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[219,331,1759],"collection":[],"class_list":["post-22968","post","type-post","status-publish","format-standard","hentry","category-news","tag-219","tag-331","tag-1759"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/22968","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=22968"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/22968\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=22968"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=22968"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=22968"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=22968"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}