{"id":3600,"date":"2024-02-03T09:12:47","date_gmt":"2024-02-03T01:12:47","guid":{"rendered":"https:\/\/www.1ai.net\/?p=3600"},"modified":"2024-02-03T09:12:47","modified_gmt":"2024-02-03T01:12:47","slug":"meta-%e8%ae%a1%e5%88%92%e4%ba%8e%e4%bb%8a%e5%b9%b4%e9%83%a8%e7%bd%b2%e8%87%aa%e5%ae%b6-ai%e8%8a%af%e7%89%87%ef%bc%8c%e5%87%8f%e5%b0%91%e5%af%b9-nvidia-gpu-%e7%9a%84%e4%be%9d%e8%b5%96","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/3600.html","title":{"rendered":"Meta plans to deploy its own AI chips this year to reduce its reliance on Nvidia GPUs"},"content":{"rendered":"<p>Social media giant <a href=\"https:\/\/www.1ai.net\/en\/tag\/meta\" title=\"View articles tagged with Meta\" target=\"_blank\" >Meta<\/a> plans to deploy a customized, second-generation <a href=\"https:\/\/www.1ai.net\/en\/tag\/ai%e8%8a%af%e7%89%87\" title=\"View articles tagged with AI chips\" target=\"_blank\" >AI chip<\/a>, \u201cArtemis,\u201d in its data centers this year.<\/p>\n<p>According to Reuters, the new chip will be used for \u201cinference\u201d in Meta's data centers, i.e. the process of running AI models. The initiative aims to reduce reliance on Nvidia chips and to control the cost of AI workloads. 
In addition, Meta offers AI-powered features across its services and is training an open-source model called Llama 3 that is intended to reach GPT-4 level.<\/p>\n<p class=\"article-content__img\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-3601\" title=\"202310191515196113_12\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/02\/202310191515196113_12.jpg\" alt=\"202310191515196113_12\" width=\"1000\" height=\"666\" \/><\/p>\n<p>Source note: The image was generated by AI and is authorized by Midjourney<\/p>\n<p>Meta CEO Mark Zuckerberg recently announced plans to end the year with 340,000 Nvidia H100 <a href=\"https:\/\/www.1ai.net\/en\/tag\/gpu\" title=\"View articles tagged with GPU\" target=\"_blank\" >GPUs<\/a>, for a total of roughly 600,000 GPUs used to run and train AI systems. That makes Meta, alongside Microsoft, one of Nvidia's largest publicly known customers. However, as models become more powerful and larger in scale, AI workloads and their costs continue to climb. Besides Meta, companies such as OpenAI and Microsoft are trying to break this cost spiral with proprietary AI chips and more efficient models.<\/p>\n<p>In May 2023, Meta first unveiled a new family of chips called Meta Training and Inference Accelerator (MTIA), designed to speed up and lower the cost of running neural networks. According to the official announcement, the first chips were expected to enter service in 2025 and were already being tested in Meta's data centers at the time. According to Reuters, Artemis is a more advanced version of MTIA.<\/p>\n<p>Meta's initiative shows its desire to reduce reliance on Nvidia chips by deploying its own AI silicon, and to keep the cost of its AI workloads under control. 
They plan to put the Artemis chip into production this year, saying: \u201cWe believe our internally developed accelerators are highly complementary to commercially available GPUs in delivering the optimal mix of performance and efficiency on Meta-specific workloads.\u201d The initiative will give Meta greater flexibility and autonomy and is expected to reduce the cost of its AI workloads.<\/p>","protected":false},"excerpt":{"rendered":"<p>The social media giant Meta plans to deploy a customized second-generation AI chip called \"Artemis\" this year. According to Reuters, the new chip will be used for \u201cinference\u201d in Meta's data centers, i.e. the process of running AI models. The initiative aims to reduce reliance on Nvidia chips and to control the cost of AI workloads. In addition, Meta offers AI-powered features across its services and is training an open-source model called Llama 3 that is intended to reach GPT-4 level. Source note: image generated by AI, authorized by Midjourney 
Meta<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1099,415,297],"collection":[],"class_list":["post-3600","post","type-post","status-publish","format-standard","hentry","category-news","tag-ai","tag-gpu","tag-meta"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/3600","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=3600"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/3600\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=3600"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=3600"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=3600"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=3600"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}