Nvidia's CEO Jensen Huang stated that the company plans to sell its Blackwell GPU accelerator for AI and HPC workloads at a price of $30-40k. However, this is an approximate price as Nvidia is more inclined to sell the entire data center component stack, not just the GPU accelerator.
The performance of the Nvidia B200 accelerator based on Blackwell with 192GB of HBM3E memory is undoubtedly impressive. However, these figures are achieved thanks to a chiplet design with two dies totaling 208 billion transistors (104 billion per die). Such a solution will be significantly more expensive to produce than the single-die GH100-based H100 accelerator with 80GB of memory. According to Raymond James analysts, each H100 costs Nvidia around $3,100 to make, while each B200 should cost around $6,000.
Developing the GB200 was costly: according to the company's CEO, Nvidia spent more than $10 billion on the new architecture and GPU design.
Last year, Nvidia partners were selling the H100 for $30,000 to $40,000, when demand for these accelerators was at its peak and supply was constrained by TSMC's manufacturing capacity.
It should be noted that Nvidia actually has no desire to sell B200 modules or cards. Instead, it may be much more inclined to sell DGX B200 servers with eight Blackwell GPUs or even DGX B200 SuperPODs with 576 B200 GPUs for millions of dollars each.
Jensen Huang emphasized that the company would prefer to sell supercomputers or DGX B200 SuperPODs, which bundle a large amount of hardware and software and command premium prices. Accordingly, the company does not list B200 cards or modules on its website, only DGX B200 systems and DGX B200 SuperPOD systems.
Source: tomshardware