{"id":7656,"date":"2024-04-11T09:25:25","date_gmt":"2024-04-11T01:25:25","guid":{"rendered":"https:\/\/www.1ai.net\/?p=7656"},"modified":"2024-04-11T09:25:25","modified_gmt":"2024-04-11T01:25:25","slug":"meta-%e5%8f%91%e5%b8%83%e6%96%b0%e4%b8%80%e4%bb%a3-ai-%e8%ae%ad%e7%bb%83%e4%b8%8e%e6%8e%a8%e7%90%86%e8%8a%af%e7%89%87%ef%bc%8c%e6%80%a7%e8%83%bd%e4%b8%ba%e5%88%9d%e4%bb%a3%e8%8a%af%e7%89%87%e4%b8%89","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/7656.html","title":{"rendered":"Meta releases a new generation of AI training and inference chips, with three times the performance of the first generation chip"},"content":{"rendered":"<p data-vmark=\"0ce1\"><a href=\"https:\/\/www.1ai.net\/en\/tag\/meta\" title=\"[View articles tagged with [Meta]]\" target=\"_blank\" >Meta<\/a> Platforms released the latest version of its Training and Inference Accelerator (MTIA) on April 10. MTIA is a series of custom <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8a%af%e7%89%87\" title=\"[Sees articles with [chips] labels]\" target=\"_blank\" >chips<\/a> designed by Meta for AI workloads.<\/p>\n<p data-vmark=\"ce39\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-7658\" title=\"fe52e9ab-fa7a-4efe-9c69-17245fe87ebe\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/fe52e9ab-fa7a-4efe-9c69-17245fe87ebe.png\" alt=\"fe52e9ab-fa7a-4efe-9c69-17245fe87ebe\" width=\"782\" height=\"621\" \/><\/p>\n<p data-vmark=\"6deb\">The new generation of MTIA reportedly <span class=\"accentTextColor\">delivers significantly improved performance over the first-generation MTIA and helps strengthen content-ranking and ad-recommendation models<\/span>. Its architecture is fundamentally focused on providing the right balance of compute, memory bandwidth, and memory capacity.<\/p>\n<p data-vmark=\"4e60\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-7657\" 
title=\"636b74dd-a601-413b-ae08-61e3ba9d9d6e\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/04\/636b74dd-a601-413b-ae08-61e3ba9d9d6e.png\" alt=\"636b74dd-a601-413b-ae08-61e3ba9d9d6e\" width=\"743\" height=\"650\" \/><\/p>\n<p data-vmark=\"c686\">The chip is also intended to improve training efficiency and make inference (i.e., actual reasoning tasks) easier. Meta said in its official blog post, &quot;Achieving our ambitions for custom chips means investing not only in computing chips, but also in memory bandwidth, networking and capacity, and other next-generation hardware systems.&quot;<\/p>\n<p data-vmark=\"ad69\">Currently, MTIA is used mainly to train ranking and recommendation algorithms, but Meta said <span class=\"accentTextColor\">the goal is to eventually expand the chip\u2019s capabilities to training generative AI, such as its Llama language models<\/span>.<\/p>\n<p data-vmark=\"204d\">The new MTIA chips are said to be &quot;fundamentally&quot; focused on providing the right balance between compute, memory bandwidth, and memory capacity. <span class=\"accentTextColor\">Each chip has 256MB of on-chip memory, runs at a 1.35GHz clock speed, and is manufactured on TSMC&#039;s 5nm process<\/span>, a significant improvement over the first-generation product&#039;s 128MB and 800MHz.<\/p>\n<p data-vmark=\"8b53\">Early results from Meta&#039;s testing show that, across the four models the company evaluated, <span class=\"accentTextColor\">the new chip delivers three times the performance of the first generation<\/span>.<\/p>\n<p data-vmark=\"4562\">The chip has already been deployed in data centers, where it serves AI applications.<\/p>","protected":false},"excerpt":{"rendered":"<p>Meta Platforms on April 10 (local time) released the latest version of its Training and Inference Accelerator (MTIA), a custom chip series designed by Meta specifically for AI workloads. 
The new generation of MTIA is described as having significantly improved performance compared with the first generation and as helping to strengthen content-ranking and ad-recommendation models. Its architecture essentially focuses on providing the right balance between compute, memory bandwidth, and memory capacity. The chip can also help improve training efficiency and make inference (i.e., actual reasoning tasks) easier. Meta said in its official blog post, \"Achieving our ambitions for custom chips means investing not only in computing chips, but also in memory<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[297,238],"collection":[],"class_list":["post-7656","post","type-post","status-publish","format-standard","hentry","category-news","tag-meta","tag-238"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7656","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=7656"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/7656\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=7656"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=7656"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=7656"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=7656"}],"curies":[{"name":"w
p","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}