{"id":7465,"date":"2024-04-09T09:36:33","date_gmt":"2024-04-09T01:36:33","guid":{"rendered":"https:\/\/www.1ai.net\/?p=7465"},"modified":"2024-04-09T09:36:33","modified_gmt":"2024-04-09T01:36:33","slug":"%e9%98%bf%e9%87%8c%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae%e5%bc%80%e6%ba%90-320-%e4%ba%bf%e5%8f%82%e6%95%b0%e6%a8%a1%e5%9e%8b%ef%bc%8c%e5%b7%b2%e5%ae%9e%e7%8e%b0-7-%e6%ac%be%e5%a4%a7%e8%af%ad%e8%a8%80","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/7465.html","title":{"rendered":"Alibaba Tongyi Qianwen open-sources 32 billion parameter models and has achieved full open-source of 7 major language models"},"content":{"rendered":"<p data-vmark=\"fce4\">April 7,<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%98%bf%e9%87%8c\" title=\"[View articles tagged with [Ali]]\" target=\"_blank\" >Ali<\/a>cloud<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e9%80%9a%e4%b9%89%e5%8d%83%e9%97%ae\" title=\"[View articles tagged with [Tongyi Thousand Questions]]\" target=\"_blank\" >Thousand Questions on Tongyi<\/a><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90\" title=\"[View articles tagged with [open source]]\" target=\"_blank\" >Open Source<\/a> 32 billion parameter model Qwen1.5-32B. IT Home noted that Tongyi Qianwen has previously open-sourced 6 models with 500 million, 1.8 billion, 4 billion, 7 billion, 14 billion and 72 billion parameters.<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large language model]]\" target=\"_blank\" >Large Language Model<\/a>.<\/p>\n<p data-vmark=\"c90d\">The open-source 32 billion parameter model will achieve a more ideal balance between performance, efficiency, and memory usage. For example, compared with the 14B open-source model of Tongyi Qianwen, the 32B model is more capable in intelligent agent scenarios; compared with the 72B open-source model of Tongyi Qianwen, the 32B model has a lower reasoning cost. 
The Tongyi Qianwen team hopes that the 32B open-source model can offer enterprises and developers a more cost-effective option.<\/p>\n<p data-vmark=\"457a\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-7466\" title=\"acaf3c81-89af-45ef-a426-f9ede31af43f\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/acaf3c81-89af-45ef-a426-f9ede31af43f.png\" alt=\"acaf3c81-89af-45ef-a426-f9ede31af43f\" width=\"544\" height=\"416\" \/><\/p>\n<p data-vmark=\"ced4\">To date, Tongyi Qianwen has open-sourced 7 large language models, and their cumulative downloads in open-source communities in China and abroad have exceeded 3 million.<\/p>","protected":false},"excerpt":{"rendered":"<p>On April 7, Alibaba Cloud's Tongyi Qianwen open-sourced the 32 billion parameter model Qwen1.5-32B. IT Home noted that Tongyi Qianwen had previously open-sourced six large language models with 500 million, 1.8 billion, 4 billion, 7 billion, 14 billion, and 72 billion parameters. The 32 billion parameter model open-sourced this time strikes a better balance among performance, efficiency, and memory usage. For example, compared with the Tongyi Qianwen 14B open-source model, 32B is more capable in intelligent agent scenarios, and compared with the Tongyi Qianwen 72B open-source model, 32B has a lower inference cost. The Tongyi Qianwen team hopes that the 32B open-source model can provide a more cost-effective model choice for enterprises and developers.
Currently, Tongyi Qianwen has open-sourced a total of 7 large language models.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[706,219,331,1759],"collection":[],"class_list":["post-7465","post","type-post","status-publish","format-standard","hentry","category-news","tag-706","tag-219","tag-331","tag-1759"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7465","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=7465"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7465\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=7465"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=7465"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=7465"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=7465"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}