{"id":3362,"date":"2024-01-30T09:44:48","date_gmt":"2024-01-30T01:44:48","guid":{"rendered":"https:\/\/www.1ai.net\/?p=3362"},"modified":"2024-01-30T09:44:48","modified_gmt":"2024-01-30T01:44:48","slug":"meta%e6%9b%b4%e6%96%b0ai%e6%a8%a1%e5%9e%8bcode-llama70b-%e5%87%86%e7%a1%ae%e6%80%a7%e6%9b%b4%e9%ab%98","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/3362.html","title":{"rendered":"Meta updates AI model Code Llama 70B with higher accuracy"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/meta\" title=\"[View articles tagged with [Meta]]\" target=\"_blank\" >Meta<\/a> has updated its code-generation <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [AI models]]\" target=\"_blank\" >AI model<\/a>, Code Llama 70B, which it describes as its largest and best-performing model to date. The Code Llama tool was launched in August 2023 and is free for both research and commercial use. According to a post on Meta\u2019s AI blog, Code Llama 70B can handle more queries than previous versions, meaning developers can provide longer prompts when programming, and it also delivers higher accuracy.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3363\" title=\"202207271436142427_0\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/01\/202207271436142427_0.jpg\" alt=\"202207271436142427_0\" width=\"943\" height=\"564\" \/><\/p>\n<p>Code Llama 70B achieves 53% accuracy on the HumanEval benchmark, surpassing GPT-3.5\u2019s 48.1% and approaching the 67% that OpenAI reported for GPT-4 in a paper (PDF).<\/p>\n<p>Code Llama is built on Llama 2 and can help developers generate code from prompts and debug human-written code. 
Meta also launched two other Code Llama tools last fall, Code Llama \u2013 Python and Code Llama \u2013 Instruct, which focus on specific programming languages.<\/p>\n<p>Code Llama 70B is available in three versions and remains free for both research and commercial use. The model was trained on 1TB of code and code-related data. It is hosted on the code repository Hugging Face, which provides access to GPUs for running AI models.<\/p>\n<p>Meta says its larger models, the 34B and 70B, &quot;can return the best results and provide better coding assistance&quot;.<\/p>\n<p>Other AI developers have released code generators in the past year. Amazon\u2019s CodeWhisperer launched in April 2023.<\/p>","protected":false},"excerpt":{"rendered":"<p>Meta has updated its code-generation AI model, Code Llama 70B, which it describes as its largest and best-performing model to date. The Code Llama tool was launched in August 2023 and is free for both research and commercial use. According to a post on Meta\u2019s AI blog, Code Llama 70B can handle more queries than previous versions, meaning developers can provide longer prompts when programming, and it is also more accurate. Code Llama 70B achieved 53% accuracy on the HumanEval benchmark, surpassing GPT-3.5\u2019s 48.1% and approaching the 67% that OpenAI reported for GPT-4 in a paper (PDF). 
C<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[167,297],"collection":[],"class_list":["post-3362","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-meta"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/3362","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=3362"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/3362\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=3362"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=3362"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=3362"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=3362"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}