-
Former Intel CEO "punctures" the AI bubble: GPUs won't last 10 years
On December 2, Intel's former CEO Pat Gelsinger said in a recent interview with the Financial Times that quantum computing will replace GPUs within the next 10 years and could burst the current artificial intelligence bubble. He stressed that quantum computing, together with classical and AI computing, forms the "trinity" of IT, and that quantum technology is on the threshold of a rapid breakthrough. Gelsinger noted that quantum computers may go mainstream within two years, which echoes the earlier view of NVIDIA CEO Jensen Huang that quantum computing… -
€1 Billion Investment, 10,000 GPUs: Jensen Huang to Appear in Berlin Next Month and Announce a European AI Hub Project
On October 28, Bloomberg reported in an article published yesterday (October 27) that NVIDIA has announced it will jointly invest 1 billion euros (note: about RMB 8.281 billion at the current exchange rate) with Deutsche Telekom to build a new artificial intelligence (AI) data centre in Munich, Germany, with detailed plans to be announced next month. The project will be located in Munich, with a total investment of 1 billion euros. Once built, Europe's largest enterprise software company, SAP, will become its computing…- 1.3k
-
OpenAI Announces Stargate Norway, the First European AI Data Center Project: 100,000 GPUs to be Deployed Within the Next Year
August 1 news: OpenAI announced yesterday that it will build the Stargate Norway data center project in Norway with partners Nscale and Aker. This is OpenAI's first AI data center initiative in Europe and the second non-US sovereign AI deployment under its OpenAI for Countries program, following the first in the United Arab Emirates (UAE). Compared with the earlier Stargate UAE, which had a planned total capacity of 1GW, Stargate Norw...- 6.9k
-
Beijing: Private Enterprises Purchasing Independently Controllable GPU Chips for Intelligent Computing Services to Receive Support in Proportion to Investment
April 29 news: the "Beijing Municipality 2025 Work Points for Promoting the Healthy and High-Quality Development of the Private Economy", recently released, lays out 59 measures across seven parts and 25 aspects. 1AI noted that, among other things, Beijing will support private enterprises in building intelligent computing centers, support enterprises that purchase independently controllable GPU chips to provide intelligent computing services with support at a certain percentage of the investment amount, and focus on supporting private enterprises' participation in building green innovation platforms. It also proposes recommending high-quality private innovation enterprises to apply for major national science and technology projects, and promoting major national scientific research infrastructure...- 2.8k
-
Details of Musk's AI supercomputer revealed: $400 million invested, a million GPUs facing a large power gap
April 2 news: Elon Musk has said that his artificial intelligence startup xAI will build the world's largest supercomputer in Memphis, Tennessee. Documents seen by Business Insider show that the company is investing hundreds of millions of dollars in the project but faces a large power shortfall. Since the project was first announced in June 2024, xAI has submitted 14 building-permit applications to the Memphis Planning and Development Agency, with a total estimated cost of $405.9 million ($2.9...)- 3.5k
-
OpenAI GPT-6 Training to Hit Record Scale: 100,000 H100 GPUs Estimated, AI Training Costs Astronomical
March 1 news: tech media outlet smartprix published a blog post yesterday (February 28) reporting that OpenAI accidentally leaked the number of GPUs that may be needed for GPT-6 training in a video introducing the GPT-4.5 model, suggesting it will be far larger than any previous run. Note: at the 2:26 mark of the GPT-4.5 introduction video, the chat transcript OpenAI used to demonstrate GPT-4.5's capabilities includes the phrase "Num GPUs for GPT 6 Training"...- 3.7k
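To see why a run of this scale is described as astronomical, here is a hedged back-of-the-envelope estimate; the per-GPU-hour price and run length are illustrative assumptions, not figures from the leak:

```python
# Illustrative cost estimate for a hypothetical 100,000-H100 training run.
# The hourly rental rate and run length below are assumptions chosen for
# illustration; neither figure appears in the leaked video.

gpu_count = 100_000            # GPUs hinted at for GPT-6 training
usd_per_gpu_hour = 2.50        # assumed cloud rental rate per H100-hour
run_days = 90                  # assumed duration of the training run

total_gpu_hours = gpu_count * run_days * 24
estimated_cost = total_gpu_hours * usd_per_gpu_hour

print(f"GPU-hours: {total_gpu_hours:,}")
print(f"Estimated compute cost: ${estimated_cost / 1e9:.2f} billion")
```

Under these assumptions the compute bill alone lands around half a billion dollars, before power, networking, or failed runs are counted.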
-
Altman admits OpenAI is short of GPUs, GPT-4.5 can only be rolled out in phases
Feb. 28 news: OpenAI CEO Sam Altman said today that the company had to roll out its latest GPT-4.5 model in stages due to a "shortage of GPU resources". In a post on the X platform, Altman noted that GPT-4.5 is a "huge" and "expensive" model, requiring tens of thousands of additional GPUs to make it available to more ChatGPT users. GPT-4.5 will be available to ChatGPT Pro subscribers first, followed by ChatGPT Plus subscribers next week... -
Meta CEO Zuckerberg: AI team to expand dramatically this year, over 1.3 million GPUs by the end of the year
January 24 news: Meta CEO Zuckerberg said on January 24 local time on the social platform Facebook that Meta will bring roughly 1GW of computing online in 2025 and will have more than 1.3 million GPUs by the end of the year. Zuckerberg said Meta plans to invest $60 billion to $65 billion in capital expenditures this year (note: currently about RMB 437.178 billion to RMB 473.609 billion) while significantly growing its artificial intelligence team. With Meta going ...- 4.4k
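Taking the figures in the post at face value, a rough per-GPU breakdown looks like the sketch below; it assumes the ~1GW of online compute and the stated capex both map onto the 1.3-million-GPU fleet, which the post itself does not spell out, and the capex covers far more than GPUs:

```python
# Back-of-the-envelope split of the 2025 figures Zuckerberg cited.
# Assumes the ~1GW of compute and the $60-65B capex both map onto the
# >1.3M-GPU fleet; in reality the capex also covers data centers,
# networking and staff, so these are illustrative upper bounds only.

online_power_watts = 1e9            # ~1 GW of compute coming online in 2025
gpu_count = 1_300_000               # fleet size expected by year end
capex_low_usd, capex_high_usd = 60e9, 65e9

watts_per_gpu = online_power_watts / gpu_count
capex_per_gpu_low = capex_low_usd / gpu_count
capex_per_gpu_high = capex_high_usd / gpu_count

print(f"Implied facility power budget: ~{watts_per_gpu:.0f} W per GPU")
print(f"Implied spend: ${capex_per_gpu_low:,.0f}-${capex_per_gpu_high:,.0f} per GPU")
```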
-
UK Government Plans to Procure 100,000 GPUs to Boost Public-Sector AI Compute 20-fold
January 13 news: British Prime Minister Keir Starmer has promised that the UK government will purchase up to 100,000 GPUs by 2030, which would mean a 20-fold increase in the UK's sovereign AI compute, mainly for AI applications in academia and public services. According to 1AI, the UK already has two advanced supercomputers, Isambard-AI at the University of Bristol and Dawn at the University of Cambridge. Isambard-AI is equipped with around 5,000 GPUs, the specialized chips at the core of building AI soft...- 3.6k
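As a quick consistency check, the sketch below relates the 100,000-GPU target to the 20x claim; it assumes the multiplier is measured against the GPU count of today's public systems, which the article only implies:

```python
# Sanity check on the UK sovereign-compute claim.
# Assumes the 20x increase is measured against today's public GPU estate,
# an interpretation of the article rather than a stated baseline.

planned_gpus = 100_000        # procurement target by 2030
claimed_multiplier = 20       # promised increase in sovereign AI compute

implied_current_gpus = planned_gpus / claimed_multiplier
print(f"Implied current baseline: ~{implied_current_gpus:,.0f} GPUs")
print("Isambard-AI alone accounts for roughly 5,000 of them.")
```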
-
IBM's New Optical Technology Reduces GPU Idle Time, Dramatically Speeds AI Model Training
Dec. 11 (Bloomberg) -- IBM has announced the development of a new optical technology that can train AI models at the speed of light while saving significant energy. The company says that, applied to data centers, the breakthrough would save as much energy per AI model trained as 5,000 U.S. homes consume in a year. IBM explained that while data centers connect to the outside world through fiber-optic cables, copper wiring is still used internally. These copper wires connect to GPU accelerators, which spend a lot of time sitting idle while waiting for data from other devices...- 2.3k
-
Denmark's First AI Supercomputer, Gefion, Launched, Powered by 1528 NVIDIA H100 GPUs
October 27 news: Denmark has launched the country's first AI supercomputer, named Gefion after the goddess in Danish mythology, with the aim of driving breakthroughs in quantum computing, clean energy, biotechnology and other fields; NVIDIA CEO Jen-Hsun Huang and the King of Denmark attended the unveiling ceremony. Gefion is an NVIDIA DGX SuperPOD supercomputer powered by 1,528 NVIDIA H100 Tensor Core GPUs using NVIDIA Quantum-2 InfiniBand net...- 6.6k
-
Larry Ellison and Elon Musk "beg" Nvidia's Jen-Hsun Huang for more GPUs at dinner
At a meeting with analysts last week, billionaire Oracle co-founder and CTO Larry Ellison told the audience that he and Elon Musk, the world's richest man, took NVIDIA CEO Jensen Huang to Nobu Palo Alto for dinner and "begged" Huang to give them more GPUs. "I would describe the dinner as Oracle - Elon and I begging Jensen for GPUs," Ellison recalled. "Please take our money. Please take our money. By the way, dinner's on me. No, no, take more. We need you to take more...- 7.3k
-
SenseTime: The domestically-built AI computing cluster currently has 54,000 GPUs, with peak computing power of 20,000P.
According to Jiemian News, at the 2024 REAL Technology Conference held today, Luan Qing, general manager of SenseTime's Digital Entertainment Division, said that the domestic artificial intelligence computing cluster SenseTime has invested in and built currently has 54,000 GPUs, with peak computing power of 20,000P. Luan Qing said SenseTime is building the country's largest artificial intelligence data center in Lingang, Shanghai, with computing nodes spread across Shanghai, Guangzhou, Chongqing, Shenzhen, Fuzhou and other locations nationwide. According to previous reports by IT Home, SenseTime's semi-annual report as of June 30, 2024 showed that in the first half of 2024,…- 8.1k
-
Meta ran into frequent failures while training Llama 3: its 16,384-GPU H100 training cluster "went on strike" every 3 hours
A research report released by Meta shows that its 16,384 NVIDIA H100 graphics card cluster used to train the 405 billion parameter model Llama 3 experienced 419 unexpected failures in 54 days, an average of one every three hours. More than half of the failures were caused by the graphics card or its high-bandwidth memory (HBM3). Due to the huge scale of the system and the high synchronization of tasks, a single graphics card failure may cause the entire training task to be interrupted and need to be restarted. Despite this, the Meta team has maintained more than 90% of effective...- 9.4k
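A quick sanity check of the reported numbers, assuming the 419 failures were spread evenly over the 54-day run; the per-GPU figure is an illustrative derivation, not a number from Meta's report:

```python
# Rough check of the reported Llama 3 training-failure figures.
# Treats all 419 interruptions as independent component failures spread
# evenly over the fleet; the per-GPU MTBF is an illustrative estimate,
# not a value taken from Meta's report.

num_gpus = 16_384          # H100s in the training cluster
failures = 419             # unexpected interruptions over the run
run_days = 54              # length of the training run

run_hours = run_days * 24
hours_between_failures = run_hours / failures                  # cluster-wide
per_gpu_mtbf_years = (run_hours * num_gpus / failures) / (24 * 365)

print(f"Cluster-wide: one failure every {hours_between_failures:.1f} hours")
print(f"Implied per-GPU MTBF: roughly {per_gpu_mtbf_years:.0f} years")
```

Even with each individual GPU failing only once every few years on average, a tightly synchronized 16,000-GPU job still sees an interruption every few hours, which is why fault tolerance dominates at this scale.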
-
Cloud computing company Lambda launches new cluster service to get Nvidia H100 GPUs on demand
Recently, GPU cloud computing company Lambda announced the launch of its new 1-Click cluster service, through which customers can now get Nvidia H100 GPU and Quantum-2 InfiniBand clusters on demand. The service lets enterprises obtain computing power only when they need it, which is especially useful for companies that do not need GPUs running 24 hours a day. (Source note: the accompanying image is AI-generated, licensed via image provider Midjourney.) Lambda co-founder and vice president Robert said: ...- 5.6k
-
Grok 2 about to be released as xAI accelerates the AI race: 100,000-GPU supercomputer to be delivered by the end of this month
Musk announced on July 9 that his artificial intelligence company xAI is building a supercomputer with 100,000 Nvidia H100 GPUs, which is expected to be delivered and start training at the end of this month. This move marks the end of xAI's negotiations with Oracle to expand its existing agreement and lease more Nvidia chips. Musk emphasized that this will become "the most powerful training cluster in the world, and the lead is huge." He said that xAI's core competitiveness lies in speed, "which is the only way to close the gap." Prior to this, xA…- 7.9k
-
Hugging Face, the world's largest open source AI community, will provide $10 million in shared GPUs for free to help small businesses compete with large companies
Hugging Face, the world's largest open source AI community, recently announced that it will provide $10 million in free shared GPUs to help developers create new AI technologies. Specifically, the purpose of Hugging Face's move is to help small developers, researchers, and startups fight against large AI companies and prevent AI progress from falling into "centralization." Hugging Face CEO Clem Delangue was interviewed by The Verge...- 3.9k
-
Intel's Falcon Shores GPU is coming late next year and has been redesigned for AI workloads
Intel made it clear at its first quarter earnings conference call at the end of last month that the Falcon Shores GPU will be launched in late 2025. According to foreign media HPCwire, the processor is being redesigned to meet the needs of the AI industry. Intel CEO Pat Gelsinger said that Falcon Shores will combine a fully programmable architecture with the excellent system performance of the Gaudi 3 accelerator, allowing users to achieve a smooth and seamless upgrade transition between two generations of hardware. Intel said that the AI industry is turning to Python…- 2.9k
-
Beijing: Enterprises that purchase independently controllable GPU chips to provide intelligent computing services will be supported in proportion to their investment amount.
On the 24th, the Beijing Municipal Bureau of Economy and Information Technology and the Beijing Municipal Communications Administration issued the "Beijing Computing Infrastructure Construction Implementation Plan (2024-2027)". The plan proposes that by 2027, the quality and scale of computing power supply across Beijing, Tianjin, Hebei and Inner Mongolia will be optimized, with efforts focused on ensuring that independently controllable computing power meets the needs of large-model training and that computing power energy-consumption standards reach a domestically leading level. Key tasks include promoting independent innovation in the computing power industry, building an efficient computing power supply system, advancing integrated computing power construction across Beijing, Tianjin, Hebei and Inner Mongolia, improving the green, low-carbon level of intelligent computing centers, deepening industry applications empowered by computing power, and ensuring computing power founda...- 7.5k
-
Nvidia H100 AI GPU shortage eases, delivery time drops from 3-4 months to 2-3 months
For a time, Nvidia's H100 GPU for artificial intelligence computing was in severely short supply. However, according to DigiTimes, Dell's Taiwan general manager Terence Liao said that the delivery lead time for the Nvidia H100 has shortened significantly in the past few months, from 3-4 months initially to 2-3 months (8-12 weeks) now. Server contract manufacturers also report that, compared with 2023, when the Nvidia H100 was almost impossible to buy, the supply bottleneck is gradually easing. Although the delivery lead time has…- 8.9k
-
Stability AI reportedly ran out of money and couldn’t pay its rented cloud GPU bills
The massive GPU clusters required for generative AI star Stability AI’s popular text-to-image generation model, Stable Diffusion, also appear to have been partly responsible for former CEO Emad Mostaque’s downfall — because he couldn’t find a way to pay for them. The UK model-building firm’s sky-high infrastructure costs allegedly depleted its cash reserves, leaving it with just $4 million as of last October, according to an exhaustive report citing company documents and dozens of people familiar with the matter. Stab…- 5.2k
-
AI star startup buys Nvidia GPUs, valuation doubles in a few weeks, but spending runs at 17 times revenue
In the AI industry, and especially in generative AI, rapid technological progress and broad application prospects have attracted heavy investment and attention. However, the high costs in this field have also sparked widespread discussion. A recent Wall Street Journal report pointed out that some AI companies are spending 17 times their revenue on Nvidia GPUs, a striking figure that has prompted deeper reflection on the industry's future. AI startup Cognition Labs, backed by well-known investor Peter Thiel, is seeking a valuation of $2 billion, and its valuation…- 3.4k
-
NVIDIA AI chip H200 starts shipping, performance improved by 60%-90% compared to H100
On March 28, according to a report by the Nikkei today, Nvidia's cutting-edge image-processing semiconductor (GPU), the H200, has started shipping. The H200 is a semiconductor for the AI field whose performance exceeds that of the current flagship H100. According to performance evaluation results released by Nvidia, taking Meta's large language model Llama 2 as an example, the H200 generates answers for generative AI up to 45% faster than the H100. Market research firm Omdia once said that in 2022…- 5.8k
-
NVIDIA releases AI Enterprise 5.0 to help enterprises develop generative AI
NVIDIA has officially released AI Enterprise 5.0, an important product designed to help enterprises accelerate the development of generative artificial intelligence (AI). AI Enterprise 5.0 includes NVIDIA microservices and downloadable software containers that can be used to deploy generative AI applications and accelerate computing. It is worth mentioning that this product has been adopted by well-known customers such as Uber. As developers turn to microservices as an effective way to build modern enterprise applications, NVIDIA AI Enterprise 5.0 provides a wide range of...- 3.9k