China’s top data official said on March 23 that the country’s daily AI token usage has surpassed 140 trillion as of this month, up from 100 billion at the start of 2024 and 100 trillion at the end of 2025, offering the clearest official picture yet of how fast AI demand is scaling inside the world’s second-largest economy. The significance is not just the size of the number. In remarks carried by Chinese state-backed media, Liu Liehong, head of the National Data Administration, also described tokens as a “settlement unit” linking technical supply and commercial demand — a sign that China is increasingly framing AI as metered infrastructure rather than only a model race.
China now has an official AI throughput number
The most important thing about the 140 trillion figure is not that it settles any global leaderboard. It does not. The reporting around Liu’s speech does not disclose a full methodology, a platform list, or whether the count covers only inference calls or a wider universe of model activity. That means readers should treat it as an official Chinese policy-and-industry signal, not as an audited and internationally standardized metric.
Even with that caveat, the number matters because it gives China’s AI economy something it has not had publicly before: a national-scale throughput indicator. AI coverage often revolves around model rankings, benchmark scores, funding rounds or the fortunes of individual companies. Liu’s sequence instead sketches a demand curve: 100 billion daily tokens at the start of 2024, 100 trillion by the end of 2025, and more than 140 trillion in March 2026. That is a jump of more than three orders of magnitude in just over two years, and it suggests that AI use in China is no longer concentrated in demos, pilots or a narrow set of cloud users.
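The growth framing above can be sanity-checked with simple arithmetic. The sketch below uses the figures as reported in Liu's sequence (they are official statements, not independently verified numbers):

```python
import math

# Daily token volumes from the reported sequence (as stated, not audited).
start_2024 = 100e9    # 100 billion tokens/day, start of 2024
end_2025 = 100e12     # 100 trillion tokens/day, end of 2025
march_2026 = 140e12   # 140+ trillion tokens/day, March 2026

growth = march_2026 / start_2024
print(f"Overall multiple: {growth:,.0f}x")               # 1,400x
print(f"Orders of magnitude: {math.log10(growth):.2f}")  # ~3.15
```

A 1,400x multiple is just over three orders of magnitude, which matches both the article's "more than three orders of magnitude" framing and the "more than 1,000x" language attributed to the China News Service report.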
For an English-language audience, that is the real news hook. China is putting a macro number on AI usage rather than talking only about model capability. In practical terms, that makes the story less about which lab has the best frontier model and more about how much AI is being consumed, routed and potentially paid for across the wider economy.
“Token as settlement unit” is the bigger signal behind the headline
Liu’s most revealing line may be his description of the token as a settlement unit between technical supply and commercial demand. That wording matters because it shifts the conversation from raw AI enthusiasm to measurement, billing and business logic. A token is not only a technical concept describing how large models process text and other data. In this framing, it also becomes a unit through which usage can be priced, revenue can be tracked, and AI services can be folded into everyday commercial activity.
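To make the "settlement unit" idea concrete, here is a deliberately simplified sketch of token-metered billing. The per-token rates are hypothetical placeholders; the speech and the reporting around it disclose no actual prices:

```python
# Toy illustration of token-metered billing. The rates below are
# hypothetical placeholders, not figures from the speech or reporting.
RATE_PER_1K_INPUT = 0.002   # hypothetical currency units per 1,000 input tokens
RATE_PER_1K_OUTPUT = 0.006  # hypothetical currency units per 1,000 output tokens

def bill(input_tokens: int, output_tokens: int) -> float:
    """Price a single model call by metering tokens on both sides."""
    return (input_tokens / 1000) * RATE_PER_1K_INPUT \
         + (output_tokens / 1000) * RATE_PER_1K_OUTPUT

# A call consuming 1,200 input tokens and 800 output tokens:
print(f"{bill(1200, 800):.4f}")
```

The point of the framing is that once every call is metered this way, aggregate token counts stop being a purely technical statistic and become a proxy for billable demand.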
That is why the 140 trillion figure reads as more than a usage spike. It points to a market that is moving toward billable throughput. Tencent News, citing Cailian Press coverage of the same speech, highlighted Liu’s claim that some model companies have generated more revenue in the 20 days since late January than in all of 2025. The speech did not identify those companies, so that line should not be stretched into a claim about any single vendor. But as an official talking point, it clearly signals that Chinese policymakers and industry observers believe monetization is no longer theoretical.
Taken together, the token-usage data and the revenue comment suggest a simple but important shift. China’s AI race is no longer being narrated only as a matter of model launches, open-source buzz or one-off viral products. It is increasingly being narrated as an economy of calls, billing, distribution and payment flows. In other words, the attention is moving from model existence to model throughput.
This is also a data-market and infrastructure story
The other reason the speech deserves attention is that Liu did not present the token data in isolation. Reports from China News Service and IT Home tie the usage surge to a wider policy package around data supply, embodied AI, computing infrastructure and electricity.
Liu said the National Data Administration has designated 2026 as a “data value release year,” with a focus on building higher-quality datasets that are better suited to training advanced models and agents and to solving real industry problems. That matters because large token volumes are only one side of the AI equation. If China wants AI usage to keep rising in a commercially useful way, it also needs better data inputs, more reliable deployment environments and a broader set of industry workflows that can absorb model output.
The same speech also pointed to embodied-AI data collection and to a tighter coupling of computing and power systems. IT Home’s report says the administration wants new computing facilities at hub nodes to use more than 80% green electricity, while China’s national integrated computing network is being folded into the next five-year planning cycle. Those details may look secondary next to the 140 trillion headline, but they actually help explain why the number matters. Beijing is signaling that AI demand at scale will require not just stronger models, but also a more organized stack of data, energy and compute coordination.
That is the larger analytical frame behind the story. China appears to be treating AI as an infrastructure category with upstream inputs and downstream billing logic, not simply as a prestige technology sector. The national token figure is therefore interesting not because it is huge in isolation, but because it is being used to support a broader industrial narrative: higher data readiness, stronger commercialization signals and more deliberate coordination between compute growth and power supply.
What changed, and what may happen next
What changed this week is that China’s AI boom gained an official scale marker. Before this, outside observers had plenty of signals that Chinese AI activity was rising: more model launches, broader enterprise adoption, aggressive price competition and rapid experimentation in consumer and industrial settings. What they did not have was a clear, top-level public number meant to summarize national usage in a single unit. Liu’s remarks filled that gap.
What may happen next is that token-based thinking becomes even more central to how China talks about AI progress. If policymakers, cloud providers, model companies and enterprise buyers increasingly use tokens as the shared language of pricing and demand, then future competition will be judged less by splashy releases alone and more by sustained usage, revenue conversion, dataset quality and infrastructure efficiency. That would mark a meaningful change in the shape of the market.
The key caution is that none of this means the 140 trillion figure should be read as a fully comparable global scoreboard. It is still an official forum statement repeated by Chinese state-backed and business media, with important methodological details left unstated. But even as a directional measure, it is revealing. It shows that China wants the world to understand its AI push not only as a race to build models, but as a race to turn AI into a metered, monetizable and industrial-scale system. That framing, more than the headline number alone, is what makes this development worth watching.
Related coverage on 1M Reviews
- OpenRouter Data Shows Chinese AI Models Overtaking U.S. in Weekly API Usage for Two Straight Weeks
- Alibaba breaks out Token Hub as Wukong pushes AI agents into enterprise workflows
- Alibaba Sets $100 Billion Cloud-and-AI Revenue Goal as China’s AI Race Turns Toward Monetization
Sources
- China News Service — Liu Liehong says China’s daily AI token usage has surpassed 140 trillion
  – https://www.chinanews.com.cn/cj/2026/03-23/10591500.shtml
  – Key use in this article: Confirms the 100 billion → 100 trillion → 140 trillion sequence, the more-than-1,000x growth framing, and the broader comments on AI industry scale and data supply.
- IT Home — Token as a “settlement unit,” data-value release year, and green-power / compute coordination
  – https://www.ithome.com/0/931/873.htm
  – Key use in this article: Supports the settlement-unit framing, the 2026 “data value release year” language, and the push for power-compute coordination with over 80% green electricity at new hub-node facilities.
- Tencent News / Cailian Press — Commercialization signal behind the token surge
  – https://news.qq.com/rain/a/20260323A06PN300
  – Key use in this article: Supports Liu’s claim that some model companies generated more revenue in the 20 days since late January than in all of 2025, reinforcing the monetization angle.
Reporting Notes
- Editorial caveat: The 140 trillion daily-token figure is presented here as Liu Liehong said / Chinese state-backed media said. It should not be treated as a third-party audited or globally standardized benchmark, because the external methodology and coverage scope have not been fully disclosed. The unnamed model-company revenue example is likewise used only as an official commercialization signal, not as a verified claim about any specific company.