At the Shanghai Global Investment Promotion Conference on March 14, 2026, city officials said Shanghai will build what it calls the country’s largest heterogeneous computing scheduling platform and issue 10 billion yuan a year in compute vouchers, giving firms access to roughly 140,000 P (petaflops) of aggregated capacity on a “use first, pay later” basis. The city framed the policy as a fix for a structural imbalance in China’s AI boom: large firms struggle to secure enough compute for model training, while small and midsize companies face high prices and uncertain supply.
The platform is positioned as a city‑level traffic controller for computing resources, pooling heterogeneous capacity across providers into a unified scheduling system rather than letting each data center operate in isolation. Officials said the pool will cover 140,000 P of heterogeneous compute, a scale that, if delivered, would make it the largest city‑wide platform of its kind in China. The public description suggests Shanghai wants to turn compute into a shared infrastructure layer rather than a fragmented, vendor‑locked asset.
The voucher program is the demand‑side lever. Shanghai said it will distribute 10 billion yuan in compute vouchers every year and allow companies to access subsidized resources with a “use first, pay later” and “automatic access, no application required” model. That design reduces the procurement bottleneck for startups and mid‑sized firms that often lack the cash flow or credit to secure long‑term GPU contracts, while still giving larger enterprises a mechanism to offset training costs.
A heterogeneous scheduling platform matters because modern AI workloads rarely run on a single type of hardware. Large‑model training mixes GPU clusters, CPU servers and specialized accelerators, and data residency or compliance rules can push workloads across multiple local providers. By centralizing scheduling, Shanghai is trying to reduce idle capacity and make it easier for companies to find the right compute mix without negotiating separately with each provider, a friction point the city explicitly called out.
The policy also lands in the context of China’s rapidly expanding compute base. The China Academy of Information and Communications Technology (CAICT) reported that China’s total compute capacity reached 962 EFlops (FP32) by June 2025, with intelligent compute accounting for 81% of the total. Against that national figure, Shanghai’s 140,000‑P pool is a city‑level slice, but it signals how local governments are translating national scale into usable, purchasable capacity for companies.
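A rough back‑of‑envelope conversion puts the two figures on the same scale, under one loud assumption: the sources do not confirm that Shanghai's 140,000 P and CAICT's 962 EFlops (FP32) are counted on the same precision basis, so the resulting share is indicative only.

```python
# Back-of-envelope scale comparison: Shanghai's planned pool vs. the national total.
# Assumption (not confirmed by the sources): both figures use a comparable
# precision basis, so petaflops convert to exaflops at 1 EFlops = 1,000 PFlops.

shanghai_pool_pflops = 140_000   # Shanghai's planned pool, in P (petaflops)
national_total_eflops = 962      # CAICT figure for mid-2025, in EFlops (FP32)

shanghai_pool_eflops = shanghai_pool_pflops / 1_000
share = shanghai_pool_eflops / national_total_eflops

print(f"Shanghai pool: {shanghai_pool_eflops:.0f} EFlops")   # 140 EFlops
print(f"Share of national total: {share:.1%}")               # 14.6%
```

Even with the caveat on precision bases, the order of magnitude holds: a single city pooling on the order of a tenth of the national figure is a substantial concentration of capacity.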
Shanghai’s move fits a broader policy pattern: local governments are shifting from one‑off subsidies toward platformized infrastructure and usage credits. Compute vouchers function like demand‑side subsidies, encouraging companies to run workloads inside the city’s ecosystem, while the scheduling platform creates a standardized marketplace for capacity across state‑owned and private providers. This approach can also make it easier to track utilization, which is important when AI investment scrutiny is rising.
Another operational implication is procurement visibility. A centralized scheduler can force providers to compete on price, latency, and accelerator type inside the platform rather than through one‑off contracts, which could lift utilization rates for expensive GPU clusters while squeezing idle capacity. If the platform truly aggregates 140,000 P, it effectively becomes a market maker for compute in Shanghai, standardizing service‑level terms and reducing the time it takes to spin up new training runs. It also gives the city a clearer view of which sectors are consuming compute and where vouchers flow, making it easier to steer subsidies toward priority industries such as manufacturing, finance, or healthcare.
For AI companies, the immediate impact is a lower barrier to experimentation. With “use first, pay later” access to a public pool, firms can prototype or fine‑tune models without locking into expensive, long‑term contracts. The bigger strategic impact is that Shanghai is signaling it wants to become a default location for model training and AI deployment in China by making compute availability a policy guarantee rather than a market gamble.
What changed is that Shanghai is no longer treating compute as a background utility; it is using a large, voucher‑backed scheduling platform to actively reshape access and pricing for AI workloads. What could happen next is that other major Chinese cities respond with rival compute‑credit programs and their own aggregated platforms, pushing China’s AI infrastructure competition from isolated data centers to city‑level compute marketplaces.
Sources
Core sources:
- https://www.stcn.com/article/detail/3677208.html
- https://m.cls.cn/detail/2312971
- https://www.yicai.com/news/103086145.html
- https://www.36kr.com/newsflashes/3722468983814785
- https://www.caict.ac.cn/kxyj/qwfb/bps/202603/P020260306392232241580.pdf