
Redazione RHC: 22 September 2025 07:25
China’s Huawei has taken a major step in building out its own artificial intelligence infrastructure. The company has unveiled solutions designed to increase computing power and reduce dependence on foreign technologies. The move is particularly significant after Chinese regulators imposed restrictions on local companies’ purchases of Nvidia AI accelerators. Huawei is now betting on its own developments and on new techniques for combining many chips into larger systems.
The key announcement was SuperPoD Interconnect, a technology that links up to 15,000 accelerators, including Huawei’s proprietary Ascend chips. The solution is reminiscent of Nvidia’s NVLink, which provides high-speed communication between AI chips. In essence, it creates scalable clusters capable of operating as a single processing center.
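To make the “single processing center” idea concrete, the sketch below uses PyTorch’s generic distributed API to show the kind of collective operation (an all-reduce of gradients) that chip-to-chip interconnects such as NVLink, and by Huawei’s account SuperPoD Interconnect, are built to accelerate. It is only an illustration of the pattern, not Huawei’s software stack; the backend names in the comments are our own references, not details from the announcement.

```python
# Minimal data-parallel all-reduce, illustrating the collective traffic that
# high-speed chip-to-chip interconnects are designed to carry.
# Generic PyTorch distributed API; this is not Huawei's software stack.
import torch
import torch.distributed as dist


def main() -> None:
    # One process per accelerator; torchrun supplies rank/world-size via env vars.
    dist.init_process_group(backend="gloo")  # "nccl" on Nvidia GPUs; Ascend systems ship their own backend (HCCL)
    rank = dist.get_rank()

    # Each device holds its own gradient; the all-reduce sums it across all devices,
    # which is what lets a large cluster train one model as if it were a single system.
    local_grad = torch.ones(4) * (rank + 1)
    dist.all_reduce(local_grad, op=dist.ReduceOp.SUM)
    print(f"rank {rank}: reduced gradient = {local_grad.tolist()}")

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```

Run with, for example, `torchrun --nproc_per_node=4 allreduce_demo.py`. The more devices take part, the more the interconnect’s bandwidth and latency, rather than any single chip, determine training throughput.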
This capability is critical for the Chinese company. While individual Ascend chips still trail Nvidia’s in performance, clustering them compensates for the shortfall and gives customers the power needed to train modern AI models, as the rough model below illustrates.
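A back-of-the-envelope model of that argument (the per-chip figures and the scaling-efficiency factor below are purely illustrative assumptions, not published Huawei or Nvidia specifications):

```python
# Toy cluster-throughput model: aggregate compute = chips x per-chip peak x scaling efficiency.
# All numbers are illustrative assumptions, not vendor figures.
def cluster_pflops(num_chips: int, per_chip_pflops: float, scaling_efficiency: float) -> float:
    return num_chips * per_chip_pflops * scaling_efficiency


# A weaker chip deployed at larger scale...
many_weak = cluster_pflops(num_chips=384, per_chip_pflops=1.0, scaling_efficiency=0.8)   # ~307 PFLOPS
# ...versus a stronger chip in a smaller pod.
few_strong = cluster_pflops(num_chips=72, per_chip_pflops=2.5, scaling_efficiency=0.9)   # ~162 PFLOPS

print(f"many weaker chips:  {many_weak:.0f} PFLOPS")
print(f"few stronger chips: {few_strong:.0f} PFLOPS")
```

The scaling-efficiency term is the whole game: without a fast interconnect it collapses, which is why SuperPoD Interconnect matters more to this strategy than the raw specifications of any single Ascend chip.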
As we have often reported on these pages, US sanctions are fueling a race for home-grown technology in both Russia and China, pushing the two powers toward technological self-sufficiency.
Cyber politics and national security now weigh on every region of the geopolitical landscape. Only time will tell whether the US policy of sanctions, begun years ago, was flawed: for the superpowers it targeted, the sanctions became an impetus to stimulate domestic industry and create new opportunities, and artificial intelligence was no exception.
It goes without saying that “technological autonomy” and “domestic technologies” have become central themes of political debate in both China and Russia, while in many other countries the concept barely features in the discussion.
Huawei has unveiled two new systems: the Atlas 950 SuperCluster and the Atlas 960 SuperCluster. The former uses over 500,000 Ascend neural processing units (NPUs), while the latter uses over 1 million. The Atlas 950 is scheduled for launch next year. According to the company, its computing power will be 1.3 times that of Elon Musk’s Colossus supercomputer.
Another important claim concerns the Atlas 950 node, which includes 8,192 Ascend accelerators: Huawei says its performance is 6.7 times that of Nvidia’s NVL144. The company is clearly aiming for enormous scalability: even if a single chip is weaker, the combined system can outperform competitors.
Huawei intends to develop the Ascend line and plans to release three new generations of these chips by 2028. Each upgrade is expected to double computing power. At the same time, the company is working on Kunpeng server processors to offer a full range of AI-based computing solutions.
Interestingly, Huawei has already unveiled an alternative to Nvidia’s GB200 NVL72: its CloudMatrix 384 system, built on 384 Ascend 910C accelerators, delivers 300 petaflops of computing power against the competitor’s 180 petaflops. For Chinese customers, this is a significant argument in favor of switching to domestic technology.
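Dividing those published system totals gives a rough idea of the implied per-accelerator throughput (our own arithmetic, assuming the NVL72 rack uses 72 accelerators, as its name suggests):

```python
# Implied per-chip throughput from the system-level figures quoted above (our arithmetic).
cloudmatrix_pflops, cloudmatrix_chips = 300, 384   # CloudMatrix 384 with Ascend 910C
nvl72_pflops, nvl72_chips = 180, 72                # Nvidia GB200 NVL72

print(f"Ascend 910C:   ~{cloudmatrix_pflops / cloudmatrix_chips:.2f} PFLOPS per chip")  # ~0.78
print(f"GB200 (NVL72): ~{nvl72_pflops / nvl72_chips:.2f} PFLOPS per chip")              # 2.50
print(f"system-level ratio: {cloudmatrix_pflops / nvl72_pflops:.2f}x in CloudMatrix's favor")  # ~1.67x
```

In other words, on these numbers Nvidia keeps roughly a three-fold per-chip lead, yet the larger Huawei system still delivers more aggregate compute, which is exactly the trade-off described above.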
| System / chip | Number of accelerators | Claimed performance | Competitor |
| --- | --- | --- | --- |
| SuperPoD Interconnect | up to 15,000 | depends on configuration | Nvidia NVLink |
| Atlas 950 SuperCluster | >500,000 | 1.3× Colossus | Colossus (Elon Musk) |
| Atlas 960 SuperCluster | >1,000,000 | not yet revealed | – |
| Atlas 950 node | 8,192 | 6.7× Nvidia NVL144 | Nvidia NVL144 |
| CloudMatrix 384 (Ascend 910C) | 384 | 300 petaflops | Nvidia GB200 NVL72 (180 petaflops) |
What if Huawei realizes its plans for a third generation of Ascend chips by 2028? China could then field its own large-scale AI infrastructure, independent of Nvidia and TSMC. That would not only reduce dependence on imported technologies but also create new competition in the global market.
| In Huawei’s favor | Against Huawei |
| --- | --- |
| Independence from Nvidia | Individual chips lag behind in raw power |
| Cluster scalability | High development complexity |
| Active development of the Ascend line | Dependence on the Chinese market |
| Competitiveness of CloudMatrix | Limited availability outside China |
Redazione