New Nvidia Blackwell GPUs put China further behind global leading edge in AI chips amid US sanctions

Applying for a patent and actually mass-producing systems in sufficient numbers are not the same thing.
That's how your system works. This is China speed. What you need to know is that 'it' will be surprising.
 
It will take until beyond the 2030s to close the gap. China is still developing an indigenous 28 nm lithography machine, which is supposed to be delivered this year, while the West already has 3 nm fabrication technology.
China is building a chip factory in Xiong'an capable of producing 2 nm or even 1 nm chips using SSMB EUV lithography, targeted for completion in 2025.
 
Btw, we all know LLMs are all hype. Most LLM start-ups and companies are just burning investors' money and making little return. In China, AI and cloud-computing resource providers are actually offering large discounts to potential LLM customers, at prices even below their costs.
LLMs are NOT hype. I use LLM-accelerated development and it has sped up our work by two orders of magnitude. Our release cycle has gone from six months to a week. Most of our DevOps operations are now automated. These days we spend more time in CRs than writing the actual config changes. It is more than real.
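For a concrete flavour of that workflow, here is a minimal sketch assuming an OpenAI-style chat API; the model name, prompt, and helper function are illustrative assumptions, not our actual tooling. The LLM drafts the config change and a human still approves it in the CR.

```python
# Hypothetical helper: ask an LLM to draft a config change, then a human reviews it in a CR.
# Assumes the OpenAI Python client (`pip install openai`) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

def draft_config_change(request: str, current_config: str) -> str:
    """Return a proposed config for a human reviewer; never applied automatically."""
    prompt = (
        "You are a DevOps assistant. Given the current config and a change request, "
        "produce the updated config only.\n\n"
        f"Current config:\n{current_config}\n\nChange request:\n{request}\n"
    )
    resp = client.chat.completions.create(
        model="gpt-4o",  # model name is an assumption; substitute whatever your team uses
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Usage: print(draft_config_change("raise replica count to 5", open("deploy.yaml").read()))
```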

I also know other teams who have replaced 90-95% of support-ticket tasks using LLMs fine-tuned on their knowledge base and older tickets.
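One common way to set that up is to turn resolved tickets into supervised fine-tuning data. A minimal sketch, assuming a `tickets.csv` export with `question` and `resolution` columns (the file name and schema are illustrative assumptions), producing chat-format JSONL:

```python
# Convert historical support tickets into chat-format JSONL suitable for fine-tuning.
import csv
import json

SYSTEM = "You are a support agent. Answer in the style of the company knowledge base."

with open("tickets.csv", newline="", encoding="utf-8") as src, \
     open("finetune_data.jsonl", "w", encoding="utf-8") as dst:
    for row in csv.DictReader(src):
        example = {
            "messages": [
                {"role": "system", "content": SYSTEM},
                {"role": "user", "content": row["question"]},
                {"role": "assistant", "content": row["resolution"]},
            ]
        }
        dst.write(json.dumps(example, ensure_ascii=False) + "\n")
```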

What is rather curious is that, for all the "patent leaders in AI", China has not generated ANY real breakthrough in ML.

AlexNet : Canadian
GoogleNet : American
VGGNet : British (Oxford)
ResNet : American
YOLO Detector : American

Diffusion Model : American
Stable Diffusion : American
Flux : German (Black Forest Labs)

Transformer Model : American
GPT : American
GPT with reasoning and reinforcement learning : American
 
Does Huawei even have a GPU that works properly with TensorFlow? PyTorch? Moreover, a number of top frameworks are tightly integrated with CUDA, not OpenCL. So it does not matter what Huawei manufactures; it will be a very long time before something like a Chinese version of TensorFlow challenges the use of NVIDIA hardware in modern deep neural network research.
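The coupling is visible even in trivial framework code: typical PyTorch examples select the `cuda` device by name, so anything that isn't Nvidia needs its own backend plumbing before existing code and kernels run unchanged. A minimal illustration (not Huawei-specific):

```python
# Typical PyTorch code assumes a CUDA device; other accelerators need a vendor backend
# before this runs unchanged on their hardware.
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

x = torch.randn(1024, 1024, device=device)
w = torch.randn(1024, 1024, device=device)
y = x @ w  # dispatched to a cuBLAS kernel on Nvidia hardware, plain BLAS on CPU
print(y.device)
```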

Not to mention, Huawei's performance is so anemic that it is not even worth looking at.

[Attached image: accelerator spec sheet showing FP16 throughput of 512 TFLOPS]
Seriously? FP16 of just 512 teraflops? In 2025? That's a joke, right? Nvidia's 2020 Ampere A100 is already in that range (312 TFLOPS dense, 624 TFLOPS with sparsity), and Hopper and Blackwell are far beyond it. And this part has YET to hit the market and be benchmarked by independent sources.
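For scale, a quick back-of-the-envelope comparison; the 512 TFLOPS figure is the one quoted in this thread (unverified), and the Nvidia numbers are the published dense FP16 Tensor Core specs:

```python
# Quick ratio check of the FP16 throughput figures discussed above.
# 512 TFLOPS is the number quoted in this thread (unverified); the Nvidia figures are
# published dense FP16 Tensor Core specs (A100 SXM: 312 TFLOPS, H100 SXM: ~989 TFLOPS).
quoted_fp16_tflops = 512.0

nvidia_dense_fp16 = {
    "A100 (2020, Ampere)": 312.0,
    "H100 (2022, Hopper)": 989.0,
}

for name, tflops in nvidia_dense_fp16.items():
    ratio = quoted_fp16_tflops / tflops
    print(f"quoted part vs {name}: {ratio:.2f}x")
```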
 
