Huawei just made a massive power play in the AI hardware wars. At Connect 2025 in Shanghai, rotating chairman Eric Xu unveiled the Atlas 950 and Atlas 960 SuperPoDs: enormous clusters of Ascend accelerators clearly designed to challenge Nvidia’s upcoming Rubin-era systems (even as Nvidia’s AI chips are effectively barred from the Chinese market) and push China’s AI capabilities to new heights.

Atlas 950 SuperPoD
The Atlas 950 SuperPoD is built around 8,192 Ascend 950DT chips, and the performance figures are genuinely impressive. We’re looking at 8 EFLOPS of FP8 compute power and 16 EFLOPS at FP4 precision, backed by a massive 16.3 PB/s of interconnect bandwidth.
In practical terms, this translates to some serious AI workload performance: 4.91 million tokens per second for training and 19.6 million tokens per second for inference. Those are 17× and 26× improvements over Huawei’s current Atlas 900 A3 system, which is no slouch itself.
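Working backwards from the quoted multipliers gives a rough sense of the baseline being compared against. A minimal sketch; the Atlas 950 figures are from Huawei’s announcement, while the implied Atlas 900 A3 throughputs are derived here, not officially stated:

```python
# Headline Atlas 950 SuperPoD throughput figures (from the announcement).
atlas950_train_tps = 4.91e6   # tokens/s, training
atlas950_infer_tps = 19.6e6   # tokens/s, inference

# Implied Atlas 900 A3 baselines, derived from the quoted 17x / 26x speedups.
implied_a3_train = atlas950_train_tps / 17   # ~289k tokens/s
implied_a3_infer = atlas950_infer_tps / 26   # ~754k tokens/s

print(f"Implied Atlas 900 A3 training:  {implied_a3_train:,.0f} tokens/s")
print(f"Implied Atlas 900 A3 inference: {implied_a3_infer:,.0f} tokens/s")
```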
The physical footprint is equally massive – we’re talking 160 cabinets all connected via Huawei’s new UnifiedBus 2.0 optical protocol. Huawei claims this interconnect is ten times faster than today’s internet backbone infrastructure, which is critical when you’re moving data between thousands of accelerators.
SuperCluster
Here’s where Huawei’s ambitions become clear. They’re planning to link 64 of these SuperPoDs into Atlas 950 SuperClusters, creating systems with more than 520,000 NPUs delivering 524 EFLOPS of FP8 compute.
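The cluster arithmetic is internally consistent, and it also implies a per-chip figure Huawei didn’t state directly. A quick sanity check; the chip count and EFLOPS totals are announced numbers, while the roughly 1 PFLOPS-per-chip figure is derived here:

```python
# Announced building blocks of the Atlas 950 SuperCluster.
chips_per_superpod = 8_192
superpods_per_cluster = 64

total_npus = chips_per_superpod * superpods_per_cluster
print(total_npus)  # 524,288, consistent with "more than 520,000 NPUs"

# Announced aggregate FP8 compute, used to back out per-chip performance.
cluster_fp8_eflops = 524
per_chip_pflops = cluster_fp8_eflops * 1e18 / total_npus / 1e15
print(f"~{per_chip_pflops:.2f} PFLOPS FP8 per Ascend 950DT (implied)")
```

The same division applied to a single SuperPoD (8 EFLOPS across 8,192 chips) lands at essentially the same per-chip number, which suggests the cluster figure is a straight multiple rather than accounting for scaling losses.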
Xu wasn’t shy about the competitive positioning either; he directly claimed this setup will outperform Elon Musk’s xAI Colossus and even Nvidia’s upcoming NVL144 and NVL576 deployments, citing six- to sevenfold compute advantages plus superior memory and network throughput.
Atlas 960
But Huawei isn’t stopping there. The Atlas 960 SuperPoD, slated for 2026, raises every major specification: 15,488 Ascend 960 chips, up to 30 EFLOPS FP8, 60 EFLOPS FP4, and 34 PB/s bandwidth – roughly doubling the chip count and bandwidth while nearly quadrupling the compute. The companion 960 SuperCluster is projected to hit 2 ZFLOPS of FP8 performance when it arrives in 2027, numbers that start to sound almost surreal.
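Putting the quoted specs side by side shows the generational jump isn’t uniform across the board. A small sketch; the spec values come from the announcement, while the per-spec ratios and the 64-pod cluster assumption are computed here:

```python
# Quoted specs for each SuperPoD generation.
atlas950 = {"chips": 8_192,  "fp8_eflops": 8,  "fp4_eflops": 16, "bw_pbps": 16.3}
atlas960 = {"chips": 15_488, "fp8_eflops": 30, "fp4_eflops": 60, "bw_pbps": 34}

# Generation-over-generation ratios, derived (not stated by Huawei).
ratios = {spec: atlas960[spec] / atlas950[spec] for spec in atlas950}
for spec, r in ratios.items():
    print(f"{spec}: {r:.2f}x")

# Assuming the 960 SuperCluster keeps the Atlas 950's 64-pod layout
# (not confirmed), 64 x 30 EFLOPS lands close to the announced 2 ZFLOPS.
cluster_zflops = 64 * atlas960["fp8_eflops"] / 1000
print(f"projected 960 SuperCluster: {cluster_zflops:.2f} ZFLOPS FP8")
```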
New Ascend Roadmap
Behind all this hardware is a completely refreshed Ascend chip roadmap. The 950PR and 950DT arrive in 2026, featuring Huawei’s own HBM variants: HiBL 1.0 for cost-efficient prefill operations and HiZQ 2.0 optimized for decode and training workloads.
The 960 follows in 2027, then the 970 in 2028. Each generation promises to “double compute” while expanding support for emerging formats like FP8, MXFP4, and HiF4.
Scale-Over-Silicon Strategy
For a company that’s been cut off from advanced Western semiconductor fabs, Huawei’s strategy is becoming crystal clear: if you can’t win on per-chip performance, win through sheer scale and system integration.
By controlling the entire stack (memory subsystems, networking fabric, packaging, and interconnects) and by deploying hundreds of thousands of chips behind ultra-low-latency interconnects, they’re betting they can meet China’s massive demand for model training compute while seriously challenging Nvidia’s market position.
Reality Check
Of course, there’s always a gap between conference announcements and real-world deployments. These massive systems will face significant challenges around power consumption, thermal management, and perhaps most critically, software ecosystem maturity. Nvidia’s CUDA advantage didn’t happen overnight, and replicating that software stack optimization across such massive clusters is no small feat.
But from a pure ambition standpoint, Huawei has definitely fired a serious shot across the Pacific. Whether these enormous machines can deliver their promised performance outside of PowerPoint presentations will largely depend on solving real-world engineering challenges around power budgets, cooling infrastructure, and software optimization.
The AI hardware race just got a lot more interesting.
In related AI news, Tencent has recently launched a groundbreaking 3D AI modeling tool that’s completely free, while Baidu’s new PP-OCRv5 model outperforms larger rivals in OCR benchmarks.
For more daily updates, please visit our News Section.