{"id":5686,"date":"2024-03-18T09:39:22","date_gmt":"2024-03-18T01:39:22","guid":{"rendered":"https:\/\/www.1ai.net\/?p=5686"},"modified":"2024-03-18T09:39:22","modified_gmt":"2024-03-18T01:39:22","slug":"%e6%b6%88%e6%81%af%e7%a7%b0%e8%8b%b1%e4%bc%9f%e8%be%be-blackwellb100gpu-%e5%b0%86%e9%85%8d-192gb-hbm3e-%e6%98%be%e5%ad%98%ef%bc%8cb200-%e9%85%8d-288gb-%e6%98%be%e5%ad%98","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/5686.html","title":{"rendered":"Nvidia Blackwell &quot;B100&quot; GPU to feature 192GB HBM3e memory, B200 to feature 288GB"},"content":{"rendered":"<p data-vmark=\"1f91\"><a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8b%b1%e4%bc%9f%e8%be%be\" title=\"Look at the article with the label\" target=\"_blank\" >Nvidia<\/a>will be holding the GTC 2024 keynote tomorrow, and Jen-Hsun Huang is expected to announce the next-generation GPU architecture called Blackwell.<\/p>\n<p data-vmark=\"f080\">According to XpeaGPU's breaking news, the B100 GPU, which launches tomorrow, will utilize two<span class=\"accentTextColor\">TSMC CoWoS-L<\/span>\u00a0packaging technology<a href=\"https:\/\/www.1ai.net\/en\/tag\/%e8%8a%af%e7%89%87\" title=\"[Sees articles with [chips] labels]\" target=\"_blank\" >chip<\/a>CoWoS (Chip-on-Wafer) is an advanced 2.5D packaging technology that involves stacking chips together to increase processing power while saving space and reducing power consumption.<\/p>\n<p data-vmark=\"46cc\"><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-5687\" title=\"84114403-e77a-4c88-be2f-f2c58b528367\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2024\/03\/84114403-e77a-4c88-be2f-f2c58b528367.jpg\" alt=\"84114403-e77a-4c88-be2f-f2c58b528367\" width=\"1920\" height=\"1080\" \/><\/p>\n<p data-vmark=\"4762\">XpeaGPU reveals that the two compute chips of the B100 GPU will be connected to eight 8-Hi HBM3e memory stacks.<span class=\"accentTextColor\">Total capacity of 192GB<\/span>. 
It's worth noting that AMD already offers the same 192GB capacity with eight HBM3 stacks on its Instinct MI300 GPU.<\/p>\n<p data-vmark=\"ac05\">Looking ahead, the tipster claims that the next Blackwell GPU refresh, codenamed B200, will use 12-Hi stacks to reach a higher capacity of <span class=\"accentTextColor\">288GB of memory<\/span>, but did not reveal whether the stacks will be HBM3e or HBM4.<\/p>\n<p data-vmark=\"bdd1\">NVIDIA has previously previewed the strong performance of the B100 GPU, which <span class=\"accentTextColor\">is set to launch in 2024<\/span>; expect a follow-up report on its specific parameters.<\/p>","protected":false},"excerpt":{"rendered":"<p>Nvidia will hold its GTC 2024 keynote tomorrow, and Jen-Hsun Huang is expected to announce the next-generation GPU architecture, called Blackwell. According to a leak from XpeaGPU, the B100 GPU launching tomorrow will use two chips based on TSMC CoWoS-L packaging technology. CoWoS (Chip-on-Wafer-on-Substrate) is an advanced 2.5D packaging technology that stacks chips together to increase processing power while saving space and reducing power consumption. XpeaGPU reveals that the two compute chips of the B100 GPU will be connected to eight 8-Hi HBM3e memory stacks with a total capacity of 192GB. 
It's worth noting that AMD already offers 192GB<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[238,239],"collection":[],"class_list":["post-5686","post","type-post","status-publish","format-standard","hentry","category-news","tag-238","tag-239"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/5686","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=5686"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/5686\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=5686"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=5686"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=5686"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=5686"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}