AMD has released a new graphics card, the MI300. Shortly after, Nvidia claimed that its own graphics card, the H100, is much faster. Why does this dispute matter, even though these graphics cards are not aimed at gamers?
A few days ago, on December 6th, AMD launched its new AI graphics card, the MI300 series. At the launch, it claimed to offer the fastest AI graphics card on the market and to beat the competition by a wide margin. Nvidia, however, did not take this lying down and stated that AMD was boasting with incorrect figures.
But why are this dispute and these AI GPUs so important that the heavy artillery has to be brought out?
Two AI Graphics Cards Compete for the Top Spot
What is this GPU anyway? With the “Instinct MI300” series, consisting of two GPUs, AMD has introduced graphics cards for AI computations. The main competitors are Nvidia’s H100 and the new H200, which is expected to be released sometime in 2024.
The industry has been waiting for an alternative to Nvidia for a long time, and there is a good reason for that:
- Prices have risen sharply for months, as Nvidia is struggling to keep up with demand.
- Many buyers are therefore waiting for a promising alternative rather than paying horrendous prices for the few available models.
- Export restrictions on the Chinese market in particular have made the price situation for these graphics cards even worse.
What is behind the dispute? In its presentation, AMD stated that the MI300X is on par with the H100 in training and outperforms it by 10-20% in inference, as reported by Forbes.
Nvidia responded with a blog post of its own, in which it explained, somewhat tersely, that AMD had not used optimized software for the published figures (via Nvidia.com):
At its recent launch event, AMD discussed the inference performance of the H100 GPU compared to its MI300X chip. The results it shared did not use optimized software, and with proper benchmarking the H100 is 2x faster.
In the blog post, Nvidia disputes AMD’s claim that its latest chip, the MI300X, is 40-60% faster in generative AI inference. According to Nvidia, AMD simply used outdated software to generate the figures, which can be seen in the footnotes, where AMD names the software version it used.
The takeaway is clear: a fast AI chip alone is not enough; it also needs optimized software so that the full computing power can actually be utilized.
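This effect can be reproduced on any machine with a toy benchmark: identical hardware and identical arithmetic, yet the measured speed depends heavily on how the software is written. The sketch below is a minimal, generic illustration in Python and has nothing to do with either vendor’s actual software stack; it simply times a textbook matrix multiply against a version that reorganizes memory access:

```python
import time

def matmul_naive(a, b):
    """Textbook triple loop: correct, but walks b column by column,
    which is slow for row-stored matrices."""
    n, m, p = len(a), len(b), len(b[0])
    c = [[0.0] * p for _ in range(n)]
    for i in range(n):
        for j in range(p):
            s = 0.0
            for k in range(m):
                s += a[i][k] * b[k][j]
            c[i][j] = s
    return c

def matmul_tuned(a, b):
    """Same arithmetic, but b is transposed first so the inner loop
    reads contiguous rows instead of scattered columns."""
    bt = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in bt]
            for row in a]

if __name__ == "__main__":
    n = 120  # small enough to run quickly, large enough to show a gap
    a = [[float(i + j) for j in range(n)] for i in range(n)]
    b = [[float(i - j) for j in range(n)] for i in range(n)]

    for name, fn in [("naive", matmul_naive), ("tuned", matmul_tuned)]:
        t0 = time.perf_counter()
        fn(a, b)
        print(f"{name}: {time.perf_counter() - t0:.3f}s")
```

Both functions produce identical results, but the second typically runs noticeably faster — the same principle, at a vastly larger scale, behind Nvidia’s argument that benchmarks without optimized software understate what a chip can do.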
AI Has Been Gaining Importance for Months
Why is AI so important for Nvidia and AMD? Nvidia has been actively investing in chips capable of handling AI computations for years.
Modern GeForce RTX graphics cards contain AI-specific Tensor Cores, which are used, for example, for AI-based features such as DLSS upscaling. And Nvidia GPUs also power large data centers, such as those behind ChatGPT.
There is therefore enormous financial potential behind AI technology, which is also of interest to AMD. That is why AMD is investing in the development of AI chips to capture at least some market share from the competition, and development is currently moving at a rapid pace.
Next year, the situation is likely to look different again: with Nvidia’s H200 and a new AI graphics card from Intel, there will once more be strong competition.
That is why the dispute over the fastest AI graphics card matters so much: in the end, it is about a great deal of money, which is also significant for investors holding AMD and Nvidia stock.