NVIDIA's once hard-to-get H100 data center GPU has seen a noticeable reduction in delivery lead times as market supply improves, according to a report from Tom's Hardware. As a result, customers who previously purchased large quantities of H100 chips have reportedly begun reselling them.
The report notes that wait times for the H100, whose demand surged with the boom in artificial intelligence applications, have fallen from a peak of 8-11 months to 3-4 months, signaling an easing of supply pressure.
In addition, with major cloud providers such as AWS, Google Cloud, and Microsoft Azure making AI computing services easier to access, enterprises that bought large quantities of H100 GPUs have had a further push to resell them.
For instance, AWS introduced a service that lets customers rent GPUs for shorter periods, easing earlier demand bottlenecks and shortening the wait for AI chips.
The report attributes these resales to the GPUs' reduced scarcity and their high maintenance costs, a situation that contrasts starkly with the market shortage a year ago.
Still, even though H100 GPUs are now much easier to obtain, the AI market remains robust overall. Some enterprises continue to need large-scale AI model computation, keeping overall demand above supply and preventing any significant drop in H100 prices.
The report emphasizes that the easier availability of H100 GPUs has also changed the market: customers now prioritize price and practicality when leasing AI computing services from cloud providers.
Alternatives to the H100 have also emerged, offering comparable performance and software support at potentially lower prices, which could lead to a more balanced market.
TrendForce’s newest projections spotlight a 2024 landscape where demand for high-end AI servers—powered by NVIDIA, AMD, or other top-tier ASIC chips—will be heavily influenced by North America’s cloud service powerhouses.
Microsoft (20.2%), Google (16.6%), AWS (16%), and Meta (10.8%) are predicted to collectively command over 60% of global demand (63.6% combined), with NVIDIA GPU-based servers leading the charge.
However, NVIDIA still faces hurdles as it contends with US export restrictions. TrendForce points out that despite NVIDIA's strong position in the data center sector, where its GPU servers capture up to 70% of the AI market, challenges continue to loom.
Three major challenges could limit the company's future growth. First, the US ban on technology exports has pushed China toward self-reliance in AI chips, with Huawei emerging as a notable competitor. NVIDIA's China-specific products, such as the H20 series, may not match the cost-effectiveness of its flagship models, potentially weakening its market dominance.
Second, proprietary ASIC development among US cloud giants, including Google, AWS, Microsoft, and Meta, is expanding year by year, driven by scale and cost considerations.
Lastly, AMD applies competitive pressure with a cost-effective strategy, offering products at just 60–70% of the prices of comparable NVIDIA models, which lets it court flagship clients more aggressively. Microsoft is expected to be the most enthusiastic adopter of AMD's high-end MI300 GPU solutions in 2024.
(Photo credit: NVIDIA)