{"id":4632,"date":"2024-02-29T09:33:16","date_gmt":"2024-02-29T01:33:16","guid":{"rendered":"https:\/\/www.1ai.net\/?p=4632"},"modified":"2024-02-29T09:33:16","modified_gmt":"2024-02-29T01:33:16","slug":"%e4%ba%ba%e4%ba%ba%e9%83%bd%e6%98%af%e7%a8%8b%e5%ba%8f%e5%91%98%ef%bc%8c%e8%8b%b1%e4%bc%9f%e8%be%be%e8%81%94%e5%90%88%e6%8e%a8%e5%87%ba-starcoder2-%e6%a8%a1%e5%9e%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/4632.html","title":{"rendered":"Everyone is a programmer, NVIDIA jointly launched the StarCoder2 model"},"content":{"rendered":"<p data-vmark=\"61a5\"><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8b%b1%e4%bc%9f%e8%be%be\" title=\"Look at the article with the label\" target=\"_blank\" >Nvidia<\/a>Recently, we have collaborated with Hugging Face and ServiceNow to release a series of LLMs models called StarCoder2.<strong>It hopes to become a new standard in the field of code generation, with many advantages such as performance, transparency and cost-effectiveness.<\/strong><\/p>\n<p data-vmark=\"ee3a\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-4633\" title=\"2669bcd5-4120-48b4-9d72-4cb77bb26152\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/02\/2669bcd5-4120-48b4-9d72-4cb77bb26152.jpg\" alt=\"2669bcd5-4120-48b4-9d72-4cb77bb26152\" width=\"760\" height=\"428\" \/><\/p>\n<p data-vmark=\"5a2c\">The series of models includes a 3 billion parameter model trained by ServiceNow, a 7 billion parameter model trained by Hugging Face, and a 15 billion parameter model trained by NVIDIA.<\/p>\n<p data-vmark=\"976a\">This was achieved using a new code dataset called Stack v2, which is seven times larger than Stack v1; new training techniques also mean the model can better understand low-resource programming languages like COBOL, mathematics, and program source code discussions.<\/p>\n<p data-vmark=\"52aa\">StarCoder2 has been trained in 619 programming languages and can perform professional tasks such as source code generation, workflow generation, and text summarization. Nvidia said that developers can use it for code completion, advanced code summarization, code snippet retrieval, etc., thereby improving work efficiency.<\/p>\n<p data-vmark=\"dd70\">NVIDIA said that compared with the initial version of StarCoder LLMs, the new 3 billion parameter model further streamlined and screened high-quality parameters, and its performance is equivalent to the initial version of StarCoder with 15 billion parameter models.<\/p>\n<p data-vmark=\"75b5\">StarCoder2 is licensed under the BigCode Open RAIL-M license, which allows royalty-free access and use. Interested users can download it from the BigCode project <a href=\"https:\/\/github.com\/bigcode-project\/starcoder2\" target=\"_blank\" rel=\"noopener\">GitHub<\/a>\u00a0The source code of the page can be obtained from\u00a0<a href=\"https:\/\/huggingface.co\/bigcode\" target=\"_blank\" rel=\"noopener\">Hugging Face<\/a>\u00a0Download the model.<\/p>","protected":false},"excerpt":{"rendered":"<p>NVIDIA, in conjunction with Hugging Face and ServiceNow, recently announced a family of LLMs called StarCoder2, which it hopes will become the new standard in code generation, offering a number of advantages such as performance, transparency and cost-effectiveness. The family of models includes a 3 billion parameter model trained by ServiceNow, a 7 billion parameter model trained by Hugging Face and a 15 billion parameter model trained by NVIDIA. 