{"id":24452,"date":"2024-12-05T02:56:20","date_gmt":"2024-12-04T18:56:20","guid":{"rendered":"https:\/\/www.1ai.net\/?p=24452"},"modified":"2024-12-04T21:56:48","modified_gmt":"2024-12-04T13:56:48","slug":"%e4%ba%9a%e9%a9%ac%e9%80%8a-aws-ai-%e8%ae%ad%e7%bb%83%e8%8a%af%e7%89%87-trainium2-%e5%ae%9e%e4%be%8b%e5%85%a8%e9%9d%a2%e5%8f%af%e7%94%a8%ef%bc%8c%e5%85%ac%e5%b8%83%e4%b8%8b%e4%bb%a3-3nm-trainium3","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/24452.html","title":{"rendered":"Amazon AWS AI Training Chip Trainium2 Instances Fully Available, Announces Next-Gen 3nm Trainium3"},"content":{"rendered":"<p><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e4%ba%9a%e9%a9%ac%e9%80%8a\" title=\"View articles tagged Amazon\" target=\"_blank\" >Amazon<\/a> AWS today announced the general availability of Trn2 instances based on Trainium2, the AI training <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8a%af%e7%89%87\" title=\"View articles tagged chip\" target=\"_blank\" >chip<\/a> developed by its in-house team, along with the launch of the Trn2 UltraServer large-scale AI training system and the announcement of the next-generation Trainium3 chip, built on a more advanced 3nm process.<\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-24453\" title=\"6bb9ab36j00snz2pj005od000uj00knp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/12\/6bb9ab36j00snz2pj005od000uj00knp.jpg\" alt=\"6bb9ab36j00snz2pj005od000uj00knp\" width=\"1099\" height=\"743\" \/><\/p>\n<p>A single Trn2 instance consists of 16 Trainium2 chips interconnected by the ultra-high-speed, high-bandwidth, low-latency NeuronLink, delivering 20.8 petaflops of peak compute for training and deploying models with billions of parameters.<\/p>\n<p>Amazon claims that Trn2 instances offer <strong>30-40% better price-performance<\/strong> than the current generation of GPU-based EC2 P5e and P5en instances.<\/p>\n<p>The larger Trn2 UltraServer aggregates four Trn2 
servers with NeuronLink, containing a total of 64 Trainium2 chips, with peak compute scaling linearly to 83.2 petaflops, <strong>enough to meet the training and deployment needs of today's largest models<\/strong>.<\/p>\n<p>Amazon is also working with the AI model developer Anthropic to build a giant EC2 UltraCluster compute cluster called Project Rainier, which comprises a large number of Trn2 UltraServers with <strong>hundreds of thousands of Trainium2 chips in total<\/strong>.<\/p>\n<p>1AI has learned that, upon completion, the cluster is <strong>expected to be the largest publicly disclosed AI computing cluster to date<\/strong>, offering more than five times the compute Anthropic currently uses to train its state-of-the-art Claude models.<\/p>\n<p>Amazon AWS also announced its next-generation Trainium3 AI training chip, the first AWS chip built on a 3nm process. Amazon says a <strong>Trainium3-based UltraServer delivers up to 4x the performance of the Trn2 UltraServer<\/strong>; the first Trainium3-based instances are expected to be available by the end of 2025.<\/p>","protected":false},"excerpt":{"rendered":"<p>Amazon AWS today announced the general availability of Trn2 instances based on Trainium2, the AI training chip developed by its in-house team, along with the launch of the Trn2 UltraServer large-scale AI training system and the announcement of the next-generation Trainium3 chip built on a more advanced 3nm process. A single Trn2 instance consists of 16 Trainium2 chips interconnected by the ultra-high-speed, high-bandwidth, low-latency NeuronLink, delivering 20.8 petaflops of peak compute for training and deploying models with billions of parameters. 
Amazon claims that Trn2 instances are comparable to the current generation of GPU-based EC2<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[370,238],"collection":[],"class_list":["post-24452","post","type-post","status-publish","format-standard","hentry","category-news","tag-370","tag-238"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/24452","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=24452"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/24452\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=24452"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=24452"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=24452"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=24452"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}