Chinese Universities Ordered 'Domestic' AI Chips. The Specs Only Match Nvidia's H200.
Procurement documents from seven Chinese institutions specify 141 GB VRAM and 4.8 TB/s bandwidth under the label "domestic chips." Only one chip on Earth matches those numbers. It is not made in China.
141 gigabytes. That is the VRAM specification that appears, again and again, across procurement documents filed by Chinese universities between October and December 2025. Jilin University listed it, Zhejiang University of Technology listed it, Jiangsu University listed it, and Southern University of Science and Technology listed it, each document independently describing the requested hardware as "domestic chips" while specifying memory configurations that not one domestic chip in production can match. Only Nvidia's H200 delivers 141 GB.
A Jamestown Foundation investigation published April 10, 2026, by analysts Sunny Cheung and Kai-shing Lau, reviewed tender documents from at least seven Chinese institutions hosted on the Zhiliao Tender Information platform. What they found is the most concrete evidence yet that China's "self-reliant" AI infrastructure depends on American silicon, and that the institutions building it know this perfectly well.
The Specifications Do Not Lie
Consider the numbers side by side.
| Chip | VRAM | Memory Type | Bandwidth | Origin |
|---|---|---|---|---|
| Nvidia H200 | 141 GB | HBM3e | 4.8 TB/s | USA (TSMC fab) |
| Nvidia H20 (export-legal) | 96 GB | HBM3 | ~4 TB/s | USA (TSMC fab) |
| Huawei Ascend 910B | 64 GB | HBM2e | ~1.8 TB/s | China (SMIC fab) |
| Huawei Ascend 950 (unreleased) | 144 GB | HiZQ 2.0 | TBD | China (announced Q4 2026) |
Henan University of Economics and Law tried a different approach. Its procurement documents listed "H20," the export-compliant Nvidia chip, but specified performance characteristics far exceeding what the H20 can deliver. China National Nuclear Corporation used the same obfuscation strategy in an October 31, 2025, filing. The pattern is consistent across institutions: write one thing on the label, require another thing in the specifications.
Jilin University did not bother with the pretense. Its December 16, 2025, procurement document explicitly named the H200, and Zhejiang University of Technology's November 11 filing did the same. Both universities apparently concluded that specificity mattered more than plausible deniability within a domestic procurement system they assumed foreign investigators would never bother to read. That calculation proved wrong when Jamestown's analysts matched specification sheets to chip datasheets and published the results in a policy brief that landed on desks in Washington within forty-eight hours.
Measuring the Gap in Gigabytes
Here is the calculation that procurement documents make possible, and it is one that neither Beijing's technology planners nor Nvidia's investor relations team has an incentive to perform.
If an institution's AI workloads require 141 GB of VRAM per GPU partition, and the best deployed domestic alternative provides 64 GB, matching each H200 on memory capacity alone takes 2.2 Ascend 910B chips. But memory capacity is the generous comparison, because bandwidth tells the harsher story: the H200 moves data at 4.8 TB/s, while the Ascend 910B manages approximately 1.8 TB/s, a factor of 2.7. For large language model training, where memory bandwidth determines how quickly weights and activations shuttle between compute cores and memory, the effective throughput gap compounds to roughly 3x to 5x at cluster scale.
A 1,000-GPU H200 cluster would require approximately 2,200 Ascend 910B GPUs to match on memory alone, and those 2,200 chips would still train models three to five times slower because of the bandwidth wall. Software compounds the disadvantage: Nvidia's CUDA ecosystem has twenty years of library support, optimization tools, and developer familiarity, while Huawei's CANN framework is younger, less documented, and supported by a fraction of the community. Porting code from CUDA to CANN is not trivial, and several procurement documents specifically cited CUDA compatibility as a requirement.
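The arithmetic above can be checked in a few lines. This is a back-of-the-envelope sketch using only the figures quoted in this article; the chip constants are those from the comparison table, and the ratios are simple divisions, not benchmark results:

```python
# Figures quoted in the article's comparison table.
H200_VRAM_GB = 141
H200_BW_TBS = 4.8          # HBM3e bandwidth, TB/s
ASCEND_910B_VRAM_GB = 64
ASCEND_910B_BW_TBS = 1.8   # approximate HBM2e bandwidth, TB/s

def chips_to_match_memory(target_vram_gb: float, alt_vram_gb: float) -> float:
    """How many alternative chips it takes to match one target chip's VRAM."""
    return target_vram_gb / alt_vram_gb

memory_ratio = chips_to_match_memory(H200_VRAM_GB, ASCEND_910B_VRAM_GB)
bandwidth_ratio = H200_BW_TBS / ASCEND_910B_BW_TBS

# Scaling a 1,000-GPU H200 cluster on memory capacity alone.
cluster_size = 1_000
ascend_cluster = round(cluster_size * memory_ratio)

print(f"memory ratio:    {memory_ratio:.1f}x")     # ~2.2x
print(f"bandwidth ratio: {bandwidth_ratio:.1f}x")  # ~2.7x
print(f"910B chips to match 1,000 H200s on memory: {ascend_cluster}")
```

Note that this models only raw capacity; the 3x to 5x effective training gap cited above also folds in bandwidth and software overheads that a capacity ratio alone cannot capture.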
Put simply: these institutions ordered H200s because nothing else would do the job.
$216 Billion in Revenue and a Senate That Noticed
Nvidia's fiscal year 2026 was extraordinary. Revenue hit $215.9 billion, a 65% year-over-year increase, with Q4 data center revenue alone reaching $62.3 billion. Q1 FY2027 guidance came in at $78 billion. But the company's earnings call contained a remarkable qualifier: Nvidia assumed zero data center compute revenue from China for the coming quarter.
Zero is a strange number when your largest foreign customer has spent the past decade building AI infrastructure around your products.
Policy is catching up to the contradiction. On April 2, 2026, Senators Tom Cotton and Mark Warner attached an amendment to the NDAA targeting $15 billion in annual AI chip exports to China, and Nvidia's stock dropped 3% the same day. The amendment requires third-party testing labs and Know Your Customer procedures for every export-licensed shipment. In January, the Trump administration had partially relaxed H200 restrictions under case-by-case review, with a 25% surcharge and performance caps at 21,000 TOPS and 6,500 GB/s DRAM bandwidth. The Senate bill signals that the relaxation may not survive the next legislative cycle.
Meanwhile, China's domestic chip makers are collecting the dividends of restriction. Cambricon Technologies, which makes AI inference chips, reported a 43x revenue surge to $413 million (2.88 billion yuan) in its latest fiscal year. The ban is generating exactly the industrial policy response American hawks feared and Chinese planners intended.
The Ascend 950 Question
At Huawei's March 2026 China Partner Conference, the company unveiled the Atlas 350 server powered by the Ascend 950PR, claiming 2.87x the compute power of Nvidia's export-legal H20. Announced specifications for the Ascend 950 are competitive on paper: 144 GB of HiZQ 2.0 memory, 1 PFLOPS of FP8 compute, and an expected Q4 2026 release.
If these numbers hold, and if the chip ships on schedule, the memory gap effectively closes, but that is a significant "if" given that Huawei's previous generation, the Ascend 910B, shipped with lower-than-announced bandwidth in some configurations. Volume production on advanced nodes at SMIC remains constrained, and "1 PFLOPS FP8" is a peak theoretical number that depends on workload characteristics, memory access patterns, and a software maturity that Huawei has never demonstrated outside controlled benchmarks.
But the strongest case against this article's thesis deserves full weight: the procurement documents are from Q4 2025, capturing a specific moment in a rapidly evolving technology race. If the Ascend 950 delivers on its specifications and Huawei's software ecosystem matures, the gap these documents reveal may narrow dramatically within 18 months. Dismissing that possibility would be unwise. Chinese technology planning operates on decade-long horizons, and the current dependency on American chips does not necessarily predict permanent dependency.
Limitations
Procurement documents are a sample, not a census. We do not know how many orders were fulfilled versus merely attempted. Gray market pricing for restricted Nvidia chips inside China is opaque, so the actual premium paid above list price cannot be calculated from public records. Southern University of Science and Technology (SUSTech) has been flagged by the Australian Strategic Policy Institute (ASPI) for military-civil fusion research programs, but the procurement documents alone do not prove that chips acquired for university research were diverted to military applications. Finally, Huawei's announced Ascend 950 specifications may prove accurate, and this analysis will need revision when independently verified benchmarks become available.
What You Can Do
If you work in semiconductor policy or export control enforcement: Procurement documents on platforms like Zhiliao are public records within China's domestic system. Systematic monitoring of these filings, cross-referenced against export license applications, would create a verification layer that currently does not exist. The Jamestown Foundation analysts demonstrated the method, and scaling it requires institutional commitment, not novel technology.
If you invest in or analyze Nvidia: Watch the gap between "zero China revenue assumed" in guidance and actual revenue recognition. If H200 exports resume under case-by-case licensing, the 25% surcharge represents incremental margin on hardware that sells at capacity. If the Senate amendment becomes law, the $15 billion annual figure provides a ceiling for modeling lost revenue. Neither scenario justifies the zero assumption beyond a single quarter.
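The two scenarios above lend themselves to a minimal sensitivity sketch. The 25% surcharge and the $15 billion annual ceiling are figures from this article; the resumed-export volumes below are purely illustrative assumptions, not disclosed numbers:

```python
# Policy parameters quoted in the article.
SURCHARGE = 0.25                        # surcharge on case-by-case H200 exports
EXPORT_CEILING_USD = 15_000_000_000     # Cotton-Warner amendment target, per year

def surcharge_uplift(resumed_china_revenue: float) -> float:
    """Incremental revenue from the surcharge if H200 exports resume."""
    return resumed_china_revenue * SURCHARGE

def capped_lost_revenue(modeled_china_revenue: float) -> float:
    """Lost-revenue estimate under the amendment, capped at the $15B ceiling."""
    return min(modeled_china_revenue, EXPORT_CEILING_USD)

# Illustrative assumption: exports resume at $10B/yr.
assumed_resumed = 10_000_000_000
print(f"surcharge uplift: ${surcharge_uplift(assumed_resumed):,.0f}")
print(f"ceiling on modeled lost revenue: ${capped_lost_revenue(2 * assumed_resumed):,.0f}")
```

Either branch of the sketch yields a nonzero number, which is the point: the zero-China assumption in guidance is a floor for one quarter, not a forecast.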
If you follow China's AI development: Track the Ascend 950 benchmarks when independent tests emerge, likely early 2027. Compare MLPerf training results, not press conference claims. Memory bandwidth and CUDA ecosystem parity matter more than peak FLOPS for real workloads. Cambricon's revenue trajectory is a better proxy for domestic chip adoption than Huawei's announcements.
Bottom Line
Seven Chinese institutions wrote "domestic chips" on their procurement forms and then specified hardware that only Nvidia makes. That is not a policy failure or an intelligence finding. It is an accounting entry, repeated across multiple universities, traceable to specific dates, and verifiable by anyone with access to a Chinese procurement database. The gap between China's AI ambitions and its actual chip capabilities is not a matter of political rhetoric. It is 141 gigabytes wide, 4.8 terabytes per second fast, and documented in the purchasing records of the institutions building the future of Chinese AI.
Sources
- Jamestown Foundation, China Brief: Behind China's AI Facade — Procurement Records Reveal Deep Dependence on US Chips (April 10, 2026)
- Nvidia Q4 FY2026 Earnings: Record $68.1B Q4 revenue, $62.3B data center (February 25, 2026)
- Caixin Global: U.S. eases H200 restrictions under case-by-case review (January 14, 2026)
- TechRadar: Huawei Ascend 950 specifications — 144GB HiZQ 2.0, 1 PFLOPS FP8 (October 2025)
- Australian Strategic Policy Institute (ASPI): SUSTech military-civil fusion research designations
- U.S. Senate NDAA Amendment: Cotton-Warner AI chip export controls targeting $15B annual exports (April 2, 2026)
- IndexBox/Federal Register: Export rule details — third-party testing, KYC procedures, 25% surcharge (January 2026)