{"id":7571,"date":"2024-04-10T10:05:31","date_gmt":"2024-04-10T02:05:31","guid":{"rendered":"https:\/\/www.1ai.net\/?p=7571"},"modified":"2024-04-10T10:05:31","modified_gmt":"2024-04-10T02:05:31","slug":"meta%e5%8d%b3%e5%b0%86%e6%8e%a8%e5%87%ba%e6%96%b0%e4%b8%80%e4%bb%a3llama3%e5%a4%a7%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/7571.html","title":{"rendered":"Meta is about to launch a new generation of Llama3 large language model"},"content":{"rendered":"<p>According to foreign media reports, <a href=\"https:\/\/www.1ai.net\/en\/tag\/meta\" title=\"[View articles tagged with [Meta]]\" target=\"_blank\" >Meta<\/a>\u00a0Platforms plans to release two small-parameter versions of the Llama3 large language model (LLM) next week, as a prelude to the largest version of Llama3, which is due to be released in the summer of 2024.<\/p>\n<p>It is reported that the largest version of Llama3 may have more than 140 billion parameters, which could put its performance on track to catch up with OpenAI&#039;s latest GPT-4 Turbo. However, the version that Meta will release next week does not yet support multimodal technology.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-7572\" title=\"202307190900246482_0\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/202307190900246482_0.jpg\" alt=\"202307190900246482_0\" width=\"1000\" height=\"561\" \/><\/p>\n<p>This news may trigger strong expectations for Llama 3. 
After the launch of Llama 2 in July last year, many companies, including Google, Musk&#039;s xAI and Mistral, have also released <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%bc%80%e6%ba%90%e5%a4%a7%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [open-source large models]]\" target=\"_blank\" >open-source large models<\/a>.<\/p>\n<p>Meta is working hard to make Llama3&#039;s answers more open and accurate. This aligns with the goal CEO Mark Zuckerberg announced in January this year: to invest heavily in building artificial general intelligence (AGI).<\/p>\n<p>Zuckerberg revealed that by the end of 2024, Meta will have nearly 600,000 GPUs, providing strong computing power for the development of Llama 3. Meta also plans to launch a new product this year to help users create their own AI characters.<\/p>\n<p>Meta&#039;s all-out push on Llama3 not only reflects the company&#039;s ambition in AI, but also raises expectations for more intelligent and open AI services.<\/p>","protected":false},"excerpt":{"rendered":"<p>Meta Platforms Inc. plans to launch two small-parameter versions of the Llama3 Large Language Model (LLM) next week as a precursor to the upcoming largest version of Llama3, which is due to be launched in the summer of 2024, according to foreign media reports. The top version of Llama3 could reportedly have more than 140 billion parameters, which would put its performance on track to catch up with OpenAI's latest GPT-4 Turbo version. However, Meta's version to be announced next week does not support multimodal technology at this time. This news could spark strong anticipation for Llama3. After the launch of Llama2 last July, a number of companies, including Google, Musk's xAI and Mistral, have also released open-source big models. 
Meta is working internally to make Ll<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[297,391],"collection":[],"class_list":["post-7571","post","type-post","status-publish","format-standard","hentry","category-news","tag-meta","tag-391"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7571","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=7571"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7571\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=7571"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=7571"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=7571"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=7571"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}