{"id":29827,"date":"2025-03-01T10:48:53","date_gmt":"2025-03-01T02:48:53","guid":{"rendered":"https:\/\/www.1ai.net\/?p=29827"},"modified":"2025-03-01T10:48:53","modified_gmt":"2025-03-01T02:48:53","slug":"openai-gpt-6-%e8%ae%ad%e7%bb%83%e8%a7%84%e6%a8%a1%e5%b0%86%e5%88%9b%e5%8e%86%e5%8f%b2%e6%96%b0%e9%ab%98%ef%bc%9a%e9%a2%84%e4%bc%b0-10-%e4%b8%87%e5%bc%a0-h100-gpu%ef%bc%8cai-%e8%ae%ad%e7%bb%83%e6%88%90","status":"publish","type":"post","link":"https:\/\/www.1ai.net\/en\/29827.html","title":{"rendered":"OpenAI GPT-6 Training to Hit Record Scale: 100,000 H100 GPUs Estimated, AI Training Costs Astronomical"},"content":{"rendered":"<p>March 1, 2025 - Technology media outlet smartprix published a blog post yesterday (February 28) reporting that <a href=\"https:\/\/www.1ai.net\/en\/tag\/openai\" title=\"View articles tagged with OpenAI\" target=\"_blank\" >OpenAI<\/a> accidentally revealed, in a video introducing the GPT-4.5 model, that <a href=\"https:\/\/www.1ai.net\/en\/tag\/gpt-6\" title=\"View articles tagged with GPT-6\" target=\"_blank\" >GPT-6<\/a> training may require a number of <a href=\"https:\/\/www.1ai.net\/en\/tag\/gpu\" title=\"View articles tagged with GPU\" target=\"_blank\" >GPU<\/a>s far larger than anything that has come before.<\/p>\n<p>Note: At the 2:26 mark of the GPT-4.5 introduction video, the words \"Num GPUs for GPT 6 Training\" appear in the chat transcript of OpenAI's demonstration of GPT-4.5 capabilities.<\/p>\n<p>Although the video offers no explanation, <strong>\"Num\" may hint at an unprecedented number, with the outlet speculating it could be as high as 100,000 GPUs.<\/strong><\/p>\n<p><img loading=\"lazy\" decoding=\"async\" class=\"alignnone size-full wp-image-29828\" title=\"4566b292j00ssfbs5003ud000k800dyp\" src=\"https:\/\/www.1ai.net\/wp-content\/uploads\/2025\/03\/4566b292j00ssfbs5003ud000k800dyp.jpg\" alt=\"4566b292j00ssfbs5003ud000k800dyp\" width=\"728\" height=\"502\" \/><\/p>\n<p>Previously, it was reported that 
OpenAI used about 10,000 GPUs to train GPT-3, and as its models have iterated, the demand for computational resources has kept growing.<\/p>\n<p>GPT-4.5, internally codenamed \"Orion\", has made significant progress in naturalness and in reducing \"hallucinations\", and its parameter count may reach 3 to 4 trillion. It is estimated that GPT-4.5 was trained on 30,000 to 50,000 NVIDIA H100 GPUs, at a cost of about $750 million to $1.5 billion (IT Home note: currently about 5.469 billion to 10.937 billion yuan).<\/p>\n<p>The meaning of \"Num\" in the screenshot is unclear, but it could stand for \"Numerous\", suggesting that GPT-6 is being trained on a far larger scale than ever before. Of course, it is also possible that this is just a smokescreen from OpenAI, like the earlier use of \"Strawberry\" as a codename for the o1 series.<\/p>","protected":false},"excerpt":{"rendered":"<p>March 1, 2025 - Tech media outlet smartprix published a blog post yesterday (February 28) reporting that OpenAI accidentally leaked the number of GPUs that may be needed for GPT-6 training in a video introducing the GPT-4.5 model, suggesting a scale far larger than ever before. Note: At the 2:26 mark of the GPT-4.5 introduction video, the words \"Num GPUs for GPT 6 Training\" appear in the chat transcript of OpenAI's demonstration of GPT-4.5 features. 
While the video doesn't explain this in any way, the \"Num\" may suggest an unprecedented number of GPUs<\/p>","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[146],"tags":[5844,415,190],"collection":[],"class_list":["post-29827","post","type-post","status-publish","format-standard","hentry","category-news","tag-gpt-6","tag-gpu","tag-openai"],"acf":[],"_links":{"self":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/29827","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/comments?post=29827"}],"version-history":[{"count":0,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/posts\/29827\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/media?parent=29827"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/categories?post=29827"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/tags?post=29827"},{"taxonomy":"collection","embeddable":true,"href":"https:\/\/www.1ai.net\/en\/wp-json\/wp\/v2\/collection?post=29827"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}