{"id":8416,"date":"2024-04-19T09:31:12","date_gmt":"2024-04-19T01:31:12","guid":{"rendered":"https:\/\/www.1ai.net\/?p=8416"},"modified":"2024-04-19T09:35:50","modified_gmt":"2024-04-19T01:35:50","slug":"meta-%e5%8f%91%e5%b8%83-llama-3%ef%bc%8c%e5%8f%b7%e7%a7%b0%e6%98%af%e6%9c%80%e5%bc%ba%e5%a4%a7%e7%9a%84%e5%bc%80%e6%ba%90%e5%a4%a7%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/8416.html","title":{"rendered":"Meta releases Llama 3, the most powerful open source large language model"},"content":{"rendered":"<p data-vmark=\"42a9\"><a href=\"https:\/\/www.1ai.net\/en\/tag\/meta\" title=\"[View articles tagged with [Meta]]\" target=\"_blank\" >Meta<\/a> issued a press release today announcing the launch of its next-generation <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e5%a4%a7%e8%af%ad%e8%a8%80%e6%a8%a1%e5%9e%8b\" title=\"[View articles tagged with [large language model]]\" target=\"_blank\" >large language model<\/a>, <a href=\"https:\/\/www.1ai.net\/en\/tag\/llama-3\" title=\"[See articles with [Llama 3] labels]\" target=\"_blank\" >Llama 3<\/a>. The new model comes in two versions, with 8 billion and 70 billion parameters.<strong>It is claimed to be the most powerful open source large language model.<\/strong><\/p>\n<p data-vmark=\"f86e\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-8417\" title=\"68384f43-2568-49bd-979e-467dbd803ab1\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/68384f43-2568-49bd-979e-467dbd803ab1.png\" alt=\"68384f43-2568-49bd-979e-467dbd803ab1\" width=\"1187\" height=\"668\" \/><\/p>\n<p data-vmark=\"ce0b\">Meta claims that Llama 3 outperforms Claude Sonnet, Mistral Medium, and GPT-3.5. IT House has attached the key features of Llama 3 below:<\/p>\n<h3 data-vmark=\"e0e9\">Open to all:<\/h3>\n<p data-vmark=\"1864\">Meta is opening up the 8-billion-parameter version of Llama 3, making cutting-edge AI technology accessible to all. 
Developers, researchers, and the curious around the globe can play, build, and experiment.<\/p>\n<h3 data-vmark=\"5988\">Smarter and safer:<\/h3>\n<p data-vmark=\"b2bb\">Llama 3 sets a new standard, demonstrating impressive reasoning capabilities and a stronger ability to follow instructions. Meta also strongly emphasizes the safe and responsible use of AI.<\/p>\n<p data-vmark=\"2f45\">In addition to Llama 3, Meta has released new trust and security tools, including Llama Guard 2, Code Shield, and CyberSec Eval 2.<\/p>\n<h3 data-vmark=\"b997\">Upcoming expansion of the application ecosystem<\/h3>\n<p data-vmark=\"1f94\">Meta will soon integrate Llama 3 into Facebook, Instagram, WhatsApp, and other apps to bring users a superior AI experience.<\/p>\n<h3 data-vmark=\"d015\">Multimodality<\/h3>\n<p data-vmark=\"e090\">Llama 3 not only processes text but can also understand images and videos. Meta is also training larger models with over 400 billion parameters.<\/p>","protected":false},"excerpt":{"rendered":"<p>Meta Corporation today issued a press release announcing the release of Llama 3, the next-generation large language model with 8 billion and 70 billion parameters, which is claimed to be the most powerful open-source large language model. Meta claims that Llama 3 outperforms Claude Sonnet, Mistral Medium, and GPT-3.5, and IT House has attached the following key features of Llama 3: Open to All: Meta is opening up the 8 billion-parameter version of Llama 3 to give everyone access to the most cutting-edge AI technology. Developers, researchers, and the curious around the globe can play, build, and experiment. 
Smarter and safer: Lla<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[148,146],"tags":[1671,297,706],"collection":[],"class_list":["post-8416","post","type-post","status-publish","format-standard","hentry","category-headline","category-news","tag-llama-3","tag-meta","tag-706"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/8416","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=8416"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/8416\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=8416"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=8416"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=8416"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=8416"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}