
According to TrendForce, Nvidia is working on the next-generation B100 and B200 GPUs based on the Blackwell architecture. The new GPUs are expected to hit the market in the second half of this year and will be aimed at Cloud Service Provider (CSP) customers, i.e., companies that operate large-scale cloud computing platforms. The company will also offer the B200A, a streamlined version of the B200, for OEM enterprise customers with edge AI needs.

It is reported that TSMC's CoWoS-L packaging capacity, which the B200 series uses, remains tight, so the B200A will switch to the simpler CoWoS-S packaging technology. Nvidia is reportedly prioritizing the B200A to meet Cloud Service Providers' requirements.

B200A technical specifications:

Unfortunately, the full technical specifications of the B200A are not yet clear. For now, it can only be confirmed that its HBM3E memory capacity is reduced from 192 GB to 144 GB. The number of HBM3E stacks is reportedly halved from eight to four, while the capacity of each stack rises from 24 GB to 36 GB.
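The reported figures are internally consistent, as a quick back-of-the-envelope check shows (the stack counts and per-stack capacities below are taken straight from the report; the variable names are only illustrative):

```python
# Sanity check of the reported HBM3E memory configurations.
B200_STACKS, B200_GB_PER_STACK = 8, 24    # B200: eight 24 GB stacks
B200A_STACKS, B200A_GB_PER_STACK = 4, 36  # B200A: four 36 GB stacks

b200_total = B200_STACKS * B200_GB_PER_STACK      # total B200 capacity
b200a_total = B200A_STACKS * B200A_GB_PER_STACK   # total B200A capacity

print(b200_total, b200a_total)  # 192 144
```

In other words, halving the stack count while enlarging each stack still leaves the B200A with 48 GB less memory than the B200.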

Source: TrendForce

The B200A will consume less power than the B200 GPUs and will not require liquid cooling; its air-cooling system will also make it easier to set up. The B200A is expected to be supplied to OEM manufacturers around the second quarter of next year.

Supply chain surveys show that NVIDIA's main high-end GPU shipments in 2024 will be based on the Hopper platform, with the H100 and H200 for the North American market and the H20 for the Chinese market. As the B200A will become available around the second quarter of 2025, it is not expected to interfere with the H200, which will arrive in or after the third quarter.

