{"id":5555,"date":"2024-03-15T09:21:10","date_gmt":"2024-03-15T01:21:10","guid":{"rendered":"https:\/\/www.1ai.net\/?p=5555"},"modified":"2024-03-15T09:21:10","modified_gmt":"2024-03-15T01:21:10","slug":"cerebras-%e6%8e%a8%e5%87%ba%e7%ac%ac%e4%b8%89%e4%bb%a3%e6%99%b6%e5%9c%86%e7%ba%a7%e8%8a%af%e7%89%87-wse-3%ef%bc%9a%e5%8f%b0%e7%a7%af%e7%94%b5-5nm-%e5%88%b6%e7%a8%8b%ef%bc%8c%e6%80%a7%e8%83%bd%e7%bf%bb","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/5555.html","title":{"rendered":"Cerebras launches third-generation wafer-level chip WSE-3: TSMC 5nm process, double the performance"},"content":{"rendered":"<p data-vmark=\"acf4\">Wafer-level <a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8a%af%e7%89%87\" title=\"View articles tagged [chips]\" target=\"_blank\" >chip<\/a> innovator <a href=\"https:\/\/www.1ai.net\/en\/tag\/cerebras\" title=\"View articles tagged [Cerebras]\" target=\"_blank\" >Cerebras<\/a> has launched its third-generation chip, the WSE-3, claiming it <span class=\"accentTextColor\">doubles the performance of the previous-generation WSE-2 at the same power consumption<\/span>.<\/p>\n<p data-vmark=\"9c11\">The WSE-3's specifications are as follows:<\/p>\n<ul class=\"list-paddingleft-2\">\n<li>\n<p data-vmark=\"2604\">TSMC 5nm process;<\/p>\n<\/li>\n<li>\n<p data-vmark=\"cce9\">4 trillion transistors;<\/p>\n<\/li>\n<li>\n<p data-vmark=\"30b9\">900,000 AI cores;<\/p>\n<\/li>\n<li>\n<p data-vmark=\"7e68\">44GB on-chip SRAM cache;<\/p>\n<\/li>\n<li>\n<p data-vmark=\"1336\">Three optional off-chip memory capacities: 1.5TB \/ 12TB \/ 1.2PB;<\/p>\n<\/li>\n<li>\n<p data-vmark=\"53fb\">125 PFLOPS of peak AI computing power.<\/p>\n<\/li>\n<\/ul>\n<p data-vmark=\"0aa2\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-5556\" title=\"ef5ce37b-e2b5-4747-b813-3032217d0738\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/ef5ce37b-e2b5-4747-b813-3032217d0738.jpg\" 
alt=\"ef5ce37b-e2b5-4747-b813-3032217d0738\" width=\"516\" height=\"516\" \/><\/p>\n<p data-vmark=\"9577\">Cerebras claims that the CS-3 system based on the WSE-3 has a memory capacity of up to 1.2PB, enough to <span class=\"accentTextColor\">train next-generation frontier models 10x larger than GPT-4 and Gemini<\/span>. It can hold a model with up to 24 trillion parameters in a single logical memory space, greatly simplifying developers' work.<\/p>\n<p data-vmark=\"af1f\">The CS-3 targets ultra-large-scale AI workloads: a compact four-system cluster can fine-tune a 70B model in one day, while the largest cluster of 2048 CS-3 systems can train a Llama 70B model from scratch in a single day.<\/p>\n<p data-vmark=\"e866\">Cerebras also says the CS-3 system is easy to use: <span class=\"accentTextColor\">large-model training requires 97% less code than on GPUs<\/span>, and a standard implementation of a GPT-3-scale model takes just 565 lines of code.<\/p>\n<p data-vmark=\"5e68\">The UAE's G42 consortium has said it will build the Condor Galaxy 3 supercomputer on Cerebras CS-3 systems; it will comprise 64 systems and deliver 8 exaFLOPS of AI computing power.<\/p>","protected":false},"excerpt":{"rendered":"<p>Cerebras, a wafer-level chip innovator, has unveiled its third-generation chip, the WSE-3, which it claims doubles the performance of its predecessor, the WSE-2, at the same power consumption. The WSE-3's specifications are as follows: TSMC 5nm process; 4 trillion transistors; 900,000 AI cores; 44GB on-chip SRAM cache; optional 1.5TB\/12TB\/1.2PB off-chip memory capacity; and 125 PFLOPS of peak AI computational power. 
Cerebras claims that the WSE-3-based CS-3 system, with up to 1.2PB of memory, can train next-generation AI models up to 10 times larger than GPT-4 and Gemini.<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[1686,238],"collection":[],"class_list":["post-5555","post","type-post","status-publish","format-standard","hentry","category-news","tag-cerebras","tag-238"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/5555","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=5555"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/5555\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=5555"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=5555"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=5555"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=5555"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}