State of the Art: Tracking China’s Race to Achieving its Artificial Intelligence 2030 Plan
- Bryan White
- Jan 16
- 20 min read

1. Introduction: China Joins the Dawn of the Intelligent Era
As the global community stands at the precipice of what historians and economists are increasingly calling the "Intelligent Era," the People's Republic of China has orchestrated a massive, state-directed mobilization to secure leadership in artificial intelligence (AI). This is not merely a technological pursuit; it is a grand strategic endeavor that intertwines national security, economic revitalization, and geopolitical influence. This report provides an exhaustive examination of China’s strategic roadmap as of early 2026, analyzing the nation’s progress against the milestones set forth in the seminal New Generation Artificial Intelligence Development Plan of 2017. By synthesizing data on large language model (LLM) performance, semiconductor sovereignty efforts, the colossal "East Data West Computing" infrastructure megaproject, and global talent flows, this analysis reveals a complex, bifurcated landscape defined by efficiency-driven innovation and sheer infrastructural scale.
The genesis of this intense drive can be traced back to what is often termed China’s "Sputnik moment": the defeat of the Go master Ke Jie by Google DeepMind’s AlphaGo in May 2017. This event served as a visceral demonstration of Western algorithmic superiority in a game deeply embedded in Chinese culture and strategy. It galvanized the Chinese leadership, transforming AI from a promising industrial sector into a paramount national priority. In the ensuing years, the geopolitical environment has shifted radically. The optimism of global technological integration has been replaced by the friction of the "Chip War"—a concerted effort by the United States and its allies to restrict China’s access to the advanced computing hardware necessary to train frontier AI models.
Yet, as we navigate through 2026, the narrative of Chinese stagnation has proven premature. The ecosystem has demonstrated a remarkable "anti-fragility." The year 2025 marked a watershed, characterized by the release of "thinking models" like DeepSeek-R1 and Alibaba's Qwen-3 that challenged the hegemony of Western counterparts like OpenAI's GPT-4 and o1 series. Simultaneously, the activation of the world's largest distributed computing network in December 2025 signaled that China is attempting to solve the "compute gap" through infrastructure scale rather than just silicon density. This report argues that China’s strategy has pivoted from pure replication to "efficiency-driven innovation," leveraging architectural optimizations and massive infrastructure investments to offset limitations in access to cutting-edge extreme ultraviolet (EUV) lithography.
In the pages that follow, we will dissect the mechanisms of China’s AI rise. We will move beyond the headlines to explore the architectural innovations allowing Chinese firms to do more with less, the engineering feats reshaping the country’s western hinterlands into data reservoirs, and the intricate interplay between state planning and market competition that defines the "Chinese Model" of AI development.
2. Grand Strategy: The Policy Ecosystem and the "AI Plus" Pivot
China’s AI trajectory is guided by a "whole-of-nation" approach, a governance model where central directives cascade down to provincial governments, state-owned enterprises (SOEs), and private technology giants. To understand the current landscape, one must first deconstruct the foundational milestones and the newer, more granular policies that have emerged to support them.
2.1 The New Generation AI Development Plan (2017-2030)
The New Generation Artificial Intelligence Development Plan (AIDP), released by the State Council in July 2017, remains the north star of Chinese AI policy. Unlike Western policy documents which often function as guidelines or recommendations, the AIDP established a rigid timeline with specific quantitative and qualitative goals that have been adhered to with surprising fidelity.1 The plan was structured around four basic principles: technology-led breakthroughs, systematic strategic layout, market-dominant commercialization, and an open-source ethos.1
The roadmap was divided into three distinct phases, each serving as a stepping stone to the next:
Step 1: Synchronization (2020)
The primary objective of the first phase was for China to "keep pace" with global advanced levels in AI technology and applications. By 2020, the goal was to establish a robust ecosystem of AI enterprises and achieve ubiquitous adoption in specific sectors. Retrospective analysis confirms this was largely achieved. By 2020, China had birthed a generation of "AI Unicorns"—companies like SenseTime, Megvii, and iFlytek—and had integrated AI deeply into its surveillance, fintech, and consumer mobile ecosystems. The "keeping pace" metric was met, particularly in computer vision and speech recognition, where Chinese researchers began to top global leaderboards.
Step 2: Breakthroughs and Industrial Integration (2025)
We are currently living through the culmination of the second phase. The specific targets set for 2025 were ambitious: China was to achieve "major breakthroughs" in AI basic theories, and AI was to become the "main driving force" of China’s industrial upgrading.1 The plan stipulated that by 2025, the core AI industry scale should exceed 400 billion RMB (approximately 55 billion USD), with related industries topping 5 trillion RMB (approximately 690 billion USD).2 This phase emphasized the transition from "application" to "innovation"—moving from using AI to defining the underlying theories and standards. As detailed later in this report, the architectural innovations of 2025 in model efficiency suggest this goal of "theoretical breakthroughs" is being pursued with vigor, necessitated by hardware constraints.
Step 3: Global Primacy (2030)
The ultimate objective is for China to become the world’s "primary center for AI innovation" by 2030. This envisions a scenario where China leads not just in market size, but in the definition of the technology itself—leading in theory, technology, and application. The economic target for this phase is a core industry value of 1 trillion RMB, with related industries reaching 10 trillion RMB.1 The language here is explicit: the goal is not parity, but supremacy.
2.2 The "AI Plus" Initiative (2024-2025)
Recognizing that theoretical leadership means little without deep economic integration, the Chinese government launched the "AI Plus" initiative, which was heavily emphasized in the 2024 Government Work Report and elaborated upon in subsequent 2025 guidelines.3 This initiative represents a strategic evolution from the "Internet Plus" strategies of the 2010s.
While "Internet Plus" focused on connecting consumers—digitizing retail, payments, and services—"AI Plus" targets the "real economy." It is a supply-side revolution aimed at manufacturing, agriculture, and logistics. The National Development and Reform Commission (NDRC) outlined specific, aggressive penetration goals: by 2027, the penetration rate of intelligent terminals and agents in key sectors is expected to exceed 70%, rising to 90% by 2030.3
This pivot is driven by macroeconomic necessities. China faces a shrinking workforce and a "middle-income trap." "AI Plus" views artificial intelligence as a critical lever to decouple economic growth from labor input. The guidelines call for the mobilization of resources across six priority areas: science and technology, industry, consumption, public welfare, governance, and global cooperation.3 The rhetoric frames AI as a "public good" and a core engine for "new quality productive forces"—a Xi Jinping-era term for growth driven by technological innovation rather than debt or construction.4
2.3 Governance, Standardization, and the "Global South" Strategy
China has moved aggressively to set the rules of the road for the global AI ecosystem, particularly for the developing world. The Global AI Governance Initiative and the 2025 Global AI Governance Action Plan position China as a champion of the "Global South," advocating for equal access to AI development rights and opposing "technological blockades" and "hegemony".5
This is a dual-track strategy. Domestically, the governance regime is strict. Regulations have been established to manage generative AI services, focusing on content security, data privacy, and alignment with socialist core values.7 These regulations require safety assessments and algorithmic filings, ensuring the state retains visibility and control over model behavior.
Internationally, however, China promotes an "Open Source and Open" ecosystem.1 By encouraging its tech giants to release powerful open-source models (like the Qwen and DeepSeek families), China is effectively exporting its standards and platforms to emerging markets in Southeast Asia, the Middle East, and Africa. This serves a strategic purpose: it lowers the barrier to entry for these nations, anchoring their digital ecosystems to Chinese architectures rather than American closed-source APIs. It is a form of "soft power" diplomacy executed through open model weights and architectures, aiming to build a "non-Western" AI stack that is resilient to US sanctions.8
3. The Silicon Frontier: Hardware Sovereignty and the Compute War
While China has innovated in policy and software, the physical layer—the chips—remains its most significant vulnerability. The "Chip War" initiated by the United States was designed to freeze China’s AI progress at a specific technological node by restricting access to advanced Graphics Processing Units (GPUs) like Nvidia’s A100 and H100, as well as the manufacturing equipment needed to build them. China’s response has been a forceful, state-subsidized drive for semiconductor self-reliance, centered largely on the national champion, Huawei.
3.1 The Sanctions Regime and the "Chokehold"
The logic of US sanctions was simple: modern AI, particularly Large Language Models (LLMs), requires massive parallel processing power. Nvidia’s GPUs are the industry standard for this task. By cutting off access to these chips and the extreme ultraviolet (EUV) lithography machines (made by ASML) needed to manufacture them, the US aimed to cap China’s compute capacity.
However, the reality of 2025 shows that while the sanctions imposed costs, they failed to induce paralysis. Instead, they forced a consolidation of the Chinese semiconductor industry and a "resource-constrained innovation" mindset. Unable to buy the best hardware, Chinese engineers were forced to optimize what they could build.
3.2 Huawei Ascend: The Domestic Alternative
Huawei’s Ascend (Shengteng) series of AI processors has become the de facto standard for Chinese AI training, filling the vacuum left by Nvidia. The Ascend 910B, and the subsequent Ascend 910C released in late 2024/early 2025, are the workhorses of China’s current AI boom.9
The Ascend 910B was manufactured on a domestic 7nm process (likely by SMIC, the Semiconductor Manufacturing International Corporation). It was designed to replace the Nvidia A100. While technical details are often shrouded in secrecy, benchmarks and teardowns allow for a comparison.
Table 1: Technical Comparison of AI Accelerators (2025 Landscape)
| Specification | Huawei Ascend 910B | Nvidia H100 (SXM) | Strategic Implication |
| --- | --- | --- | --- |
| Process Node | 7nm (SMIC N+2) | 4nm (TSMC 4N) | Nvidia retains a transistor density and energy efficiency advantage due to advanced lithography.11 |
| Memory (VRAM) | 64GB HBM2e | 80GB HBM3 | The H100 has larger capacity and significantly faster memory bandwidth (3.35 TB/s vs. an estimated <1.5 TB/s).9 |
| Interconnect | HCCS (392 GB/s) | NVLink (900 GB/s) | Nvidia clusters scale more efficiently for massive model training due to higher inter-chip bandwidth.9 |
| Inference Perf. | ~60-80% of H100 | 100% (baseline) | Ascend is highly viable for inference (running models) but lags in the efficiency of massive training runs.10 |
Reports from 2025 indicate that the Ascend 910C has improved significantly. Researchers from DeepSeek noted that with manual optimizations, the 910C could deliver 60% of the H100's performance in inference tasks 10 and potentially match it in certain training scenarios.12 However, the "lithography wall" remains. SMIC struggles to produce these 7nm chips at high yields without ASML’s EUV machines. This leads to supply constraints; while Nvidia can churn out millions of H100s/Blackwells via TSMC, Huawei’s production is limited by the yield rates of older Deep Ultraviolet (DUV) machines pushed to their physical limits.11
3.3 The Software Battlefield: CANN vs. CUDA
The greatest "moat" Nvidia possesses is not its hardware, but CUDA (Compute Unified Device Architecture)—the software platform that developers have used for nearly two decades. It is a rich ecosystem of libraries, tools, and community support. Huawei’s alternative, CANN (Compute Architecture for Neural Networks), has historically suffered from compatibility issues, bugs, and a lack of optimized libraries.
In a strategic pivot in 2025, Huawei moved to open-source CANN and aggressively fund compatibility layers for PyTorch.13 The goal is to break the "CUDA lock-in" by creating a parallel ecosystem. Chinese tech giants like Baidu and ByteDance have been conscripted into this effort, testing these chips and contributing code. The logic is that if China can make its domestic chips "good enough" to run standard PyTorch code efficiently, the efficacy of US hardware sanctions diminishes significantly. The "CANN vs. CUDA" battle is the software front of the chip war; success here means China can decouple its AI progress from Western software stacks entirely.14
4. Architectural Asymmetry: The Rise of Chinese LLMs
For years, the prevailing narrative in the West was that China would struggle to produce foundational Large Language Models (LLMs) due to censorship constraints and the aforementioned hardware restrictions. The events of late 2024 and 2025 shattered this assumption. A surge of open-source releases from Chinese labs has narrowed the performance gap with US frontier models from years to mere months, driven by architectural innovations born of necessity.15
4.1 The "DeepSeek Moment" and Efficiency-Driven Innovation
The release of DeepSeek-V3 and the reasoning model DeepSeek-R1 in late 2024 and early 2025 represented a paradigm shift. Developed by a relatively small lab (DeepSeek, spun out of the quantitative hedge fund High-Flyer), these models challenged the dominance of OpenAI’s GPT-4o and o1 models not just in raw performance, but in architectural efficiency.17
The defining characteristic of the Chinese approach in 2025 is "optimization over abundance." Barred from purchasing massive clusters of high-bandwidth memory (HBM) chips, Chinese researchers were forced to optimize the underlying architecture of their models to run on fewer, less memory-dense chips.
Mixture-of-Experts (MoE) Architecture: DeepSeek-V3 utilizes a massive Mixture-of-Experts architecture with 671 billion total parameters. However, unlike a "dense" model that activates every parameter for every calculation, DeepSeek-V3 activates only 37 billion parameters per token.18 The model uses a router to direct each piece of information (token) to the specific "experts" (sub-networks) best suited to handle it. This sparsity allows for inference speeds and costs that are a fraction of dense models. While MoE is not unique to China (Mistral and Google use it), the scale and efficiency of DeepSeek’s implementation—specifically its "Auxiliary-Loss-Free Load Balancing"—allowed it to achieve state-of-the-art results with significantly lower training costs.18
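The sparsity principle described above can be sketched in a few lines. This is a toy illustration of top-k expert routing, not DeepSeek's actual implementation; the dimensions, router, and expert matrices below are invented for demonstration (DeepSeek-V3's real ratio is roughly 37B active of 671B total parameters, about 5.5%, versus the 25% in this toy):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sparse-MoE forward pass: a router scores each token against every
# expert, but only the top_k highest-scoring experts are actually run.
d_model, n_experts, top_k = 16, 8, 2

W_router = rng.standard_normal((d_model, n_experts))
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]

def moe_forward(x):
    """x: (d_model,) -> (d_model,), touching only top_k of n_experts."""
    scores = x @ W_router                # token-to-expert affinity scores
    top = np.argsort(scores)[-top_k:]    # indices of the k best experts
    gates = np.exp(scores[top])
    gates /= gates.sum()                 # normalized gate weights over the k
    # Weighted sum of the selected experts' outputs; the remaining
    # n_experts - top_k experts are never evaluated for this token.
    return sum(g * (x @ experts[i]) for g, i in zip(gates, top))

token = rng.standard_normal(d_model)
out = moe_forward(token)
print(out.shape)  # (16,)
```

Per-token compute scales with `top_k`, not `n_experts`, which is why total parameter count can balloon while inference cost stays nearly flat.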
Multi-Head Latent Attention (MLA):
More critically, DeepSeek introduced Multi-Head Latent Attention. In traditional Transformer models, the "attention" mechanism (which allows the model to understand the relationship between words) requires storing a massive "Key-Value (KV) Cache" in the GPU’s high-bandwidth memory during generation. As context windows grow (to process long documents or books), this cache becomes the bottleneck.
MLA compresses this cache significantly using low-rank matrix compression techniques.18 By projecting the keys and values into a lower-dimensional latent space, DeepSeek-V3 drastically reduces the memory bandwidth required to serve the model. This innovation is a direct response to the hardware sanctions: if one cannot buy GPUs with higher memory bandwidth (like the H100), one must invent architectures that require less bandwidth to function. This allows DeepSeek-V3 to run efficiently on older hardware or the memory-constrained Ascend 910B.
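The memory arithmetic behind this idea is easy to demonstrate. The sketch below shows generic low-rank KV-cache compression; the dimensions and projection matrices are illustrative assumptions, not DeepSeek-V3's actual configuration (which also handles per-head structure and positional encodings that this toy omits):

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative low-rank KV compression: cache one small shared latent
# per token instead of the full K and V vectors.
seq_len, d_model, d_latent = 1024, 512, 64

W_down = rng.standard_normal((d_model, d_latent)) / np.sqrt(d_model)
W_up_k = rng.standard_normal((d_latent, d_model)) / np.sqrt(d_latent)
W_up_v = rng.standard_normal((d_latent, d_model)) / np.sqrt(d_latent)

hidden = rng.standard_normal((seq_len, d_model))

# Standard attention caches full K and V: 2 * seq_len * d_model floats.
full_cache_floats = 2 * seq_len * d_model

# The latent scheme caches only seq_len * d_latent floats and
# reconstructs K and V on the fly via the up-projections.
latent = hidden @ W_down        # (seq_len, d_latent) -- this is what's cached
K = latent @ W_up_k             # rebuilt when attention needs it
V = latent @ W_up_v
latent_cache_floats = seq_len * d_latent

print(full_cache_floats / latent_cache_floats)  # 16.0
```

With these toy numbers the cache shrinks 16x; the bandwidth saved per generated token is exactly what memory-constrained accelerators like the Ascend 910B lack.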
4.2 The Qwen Series and "System 2" Thinking
Alibaba’s Qwen (Tongyi Qianwen) series has emerged as perhaps the most robust all-around competitor to Meta’s Llama and OpenAI’s GPT series. By mid-2025, the Qwen 2.5 and Qwen 3 families were dominating open-source benchmarks.17
Qwen 3 introduced a unified framework for "System 1" (fast, intuitive) and "System 2" (slow, reasoning) thinking. Unlike OpenAI, which separated these into distinct models (GPT-4o vs o1), Qwen 3 integrates a "thinking budget" mechanism. This allows the model to dynamically allocate compute to complex reasoning tasks—spending more time "thinking" before generating an answer—or switch to rapid response modes based on the query complexity.23 Benchmarks indicate that Qwen 2.5-Max and Qwen 3-Coder rival or exceed GPT-4 on coding and mathematical tasks, traditionally the strongholds of US models.17
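The control flow of such a budget mechanism can be pictured as a dispatcher. This is purely an analogy: Qwen 3's mechanism operates inside the model itself, and the `generate` stub, the 0.3 threshold, and the complexity score below are all hypothetical constructs for illustration:

```python
def generate(query: str, thinking_tokens: int) -> str:
    # Stub standing in for the model; a real system would decode here,
    # emitting a hidden reasoning trace when thinking_tokens > 0.
    mode = "slow/reasoning" if thinking_tokens > 0 else "fast/intuitive"
    return f"[{mode}, budget={thinking_tokens}] answer to: {query}"

def answer(query: str, complexity: float, max_thinking_tokens: int = 2048) -> str:
    """Route a query to fast or deliberate mode based on estimated complexity."""
    if complexity < 0.3:                 # easy query -> System 1, no deliberation
        return generate(query, thinking_tokens=0)
    budget = int(max_thinking_tokens * complexity)  # harder -> larger budget
    return generate(query, thinking_tokens=budget)

print(answer("2+2?", 0.05))
print(answer("prove the lemma", 0.9))
```

The design point is that deliberation cost becomes a tunable dial per query, rather than a fixed property of which model you called, as in the GPT-4o/o1 split.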
4.3 Benchmarking the Convergence
The quantitative data supports the qualitative sense of convergence. The Stanford AI Index 2025 noted that while US models held a commanding lead in 2023 (gaps of 17-30% on benchmarks like MATH and HumanEval), by the end of 2024, these margins had collapsed to less than 4%.16
Table 2: Comparative Performance of Frontier Models (Mid-2025 Context)
| Benchmark | DeepSeek-R1 (China) | Qwen 2.5-Max (China) | OpenAI o1 (US) | GPT-4o (US) | Analysis |
| --- | --- | --- | --- | --- | --- |
| AIME 2024 (Math) | 79.8% | N/A | 79.2% | N/A | DeepSeek-R1 slightly outperforms o1 in complex math reasoning, showing the efficacy of RL incentives.24 |
| Codeforces (Coding) | 96.3 percentile | N/A | 96.6 percentile | N/A | Statistical tie; o1 retains a slight edge in edge-case handling, but the gap is negligible for practical use.24 |
| MMLU (General Knowledge) | 90.8% | N/A | 91.8% | N/A | US models maintain a marginal lead in broad knowledge retrieval, possibly due to larger English training corpora.24 |
| Cost (Input/Output, per M tokens) | $0.55 / $2.19 | N/A | $15.00 / $60.00 | $5.00 / $15.00 | At these list prices DeepSeek is roughly 27x cheaper than o1, a massive economic advantage derived from its efficient architecture.25 |
4.4 Distillation and the Small Language Model (SLM) Strategy
A crucial second-order effect of models like DeepSeek-R1 is the democratization of reasoning capabilities through distillation. Chinese researchers successfully "distilled" the reasoning patterns of the massive R1 model into smaller, dense models (e.g., DeepSeek-R1-Distill-Qwen-32B). These smaller models, which can run on consumer-grade hardware or edge devices, achieved performance on math benchmarks (AIME 2024) comparable to OpenAI’s o1-mini.26
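For readers unfamiliar with the mechanics, the sketch below shows the classic soft-label form of knowledge distillation: the student is trained to match the teacher's temperature-softened output distribution. Note this is the generic technique, shown for intuition; the R1 distillations reportedly lean on supervised fine-tuning over teacher-generated reasoning traces, a related but distinct recipe:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax; higher T spreads probability mass."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()                     # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distill_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return float(np.sum(p * (np.log(p) - np.log(q)))) * T * T

teacher = [4.0, 1.0, 0.2]            # toy next-token logits from a big model
aligned = [3.8, 1.1, 0.1]            # student close to teacher -> small loss
diverged = [0.1, 4.0, 1.0]           # student disagrees -> large loss

print(distill_loss(teacher, aligned) < distill_loss(teacher, diverged))  # True
```

Either way, the strategic payoff is the same: a small dense model inherits capabilities it could never have learned at its size from raw data alone.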
This focus on SLMs is strategic. It allows China to deploy high-intelligence AI into smartphones, vehicles, and factory robots—areas where China has a massive manufacturing install base—without requiring the prohibitively expensive cloud compute of trillion-parameter models. It effectively bypasses the need for massive GPU clusters for inference, shifting the computational load to the edge.28
5. Infrastructure: The "East Data West Computing" Megaproject
Recognizing that it cannot rely solely on chip efficiency, China is attempting to solve the AI equation through energy and scale. The "East Data West Computing" (EDWC) initiative is a massive geo-economic rebalancing act, moving data centers from the energy-hungry eastern coast to the resource-rich western interior.8 This project elevates computing power to the status of a utility—like water or electricity—to be generated where resources are cheap and transmitted to where demand is high.
5.1 The Logic of the Layout
The geography of China dictates this strategy. The eastern provinces (Beijing, Shanghai, Guangdong) generate the vast majority of data and economic activity but suffer from land scarcity and high electricity costs. The western provinces (Guizhou, Inner Mongolia, Gansu) have abundant land, cool climates (reducing cooling costs), and massive surpluses of renewable energy (wind, solar, hydro).
The EDWC project, launched in 2022 and accelerating through 2025, designated eight national computing hubs. By late 2025, this resulted in the activation of the Future Network Test Facility (FNTF), a 1,243-mile distributed AI computing pool. This network claims to achieve 98% of the efficiency of a single data center despite the geographic spread, utilizing breakthroughs in long-distance RDMA (Remote Direct Memory Access) and optical networking to minimize latency.30
5.2 Hub Profile: Guizhou (The Digital Valley)
Guizhou province, located in the mountainous southwest, has leveraged its unique karst geology and hydropower resources to become a global data sanctuary. The region’s many natural caves and tunnels provide secure, naturally cool environments for servers, while its hydroelectric dams provide stable, green power.
By 2025, Guizhou’s computing power scale topped 150 EFLOPS (quintillion floating-point operations per second), with "intelligent computing" (AI-specific compute) accounting for over 90% of this total.31 The Guizhou Cloud Big Data Industry Park and similar facilities host data for domestic giants like Tencent and Huawei, as well as iCloud data for Apple’s Chinese users. The sheer scale of investment is staggering: fixed-asset investments in computing facilities exceeded 22 billion yuan ($3.14 billion) in 2025 alone.31 The province aims for 190 EFLOPS by 2026, positioning itself as the engine room for the country’s "AI Plus" ambitions.31
5.3 Hub Profile: Inner Mongolia (The Green Engine)
In the north, the China Telecom Cloud Computing Inner Mongolia Information Park in Hohhot stands as the largest internet data center in Asia.32 Spanning 100 hectares and consuming up to 150 MW of power, it utilizes the region’s frigid air for "free cooling" up to 10 months a year, drastically lowering the Power Usage Effectiveness (PUE) to world-leading levels.32
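PUE is a simple ratio, so the value of free cooling is easy to quantify. The figures below are illustrative assumptions only; the Hohhot park's actual power breakdown is not public:

```python
def pue(it_power_mw: float, overhead_mw: float) -> float:
    """Power Usage Effectiveness = total facility power / IT equipment power.
    Overhead covers cooling, power conversion, lighting; 1.0 is the ideal."""
    return (it_power_mw + overhead_mw) / it_power_mw

# Hypothetical 100 MW IT load under two cooling regimes:
print(pue(100.0, 60.0))   # year-round mechanical chillers -> 1.6
print(pue(100.0, 15.0))   # mostly free-air cooling -> 1.15
```

In this hypothetical, free cooling cuts non-IT overhead by 45 MW, which at data-center scale translates directly into the cost and carbon advantages the western hubs are built around.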
These hubs are not just storage lockers; they are training grounds. By co-locating AI training clusters with gigawatt-scale wind and solar farms, China aims to insulate its AI industry from global energy price volatility and carbon regulations. The strategic division of labor envisions these western clusters handling "background" processing—model training and cold data storage—while eastern hubs handle "real-time" inference requiring low latency.29
6. Economic Integration: Implementing "AI Plus"
The ultimate measure of China’s AI dominance will not be benchmark scores, but economic impact. The "AI Plus" initiative seeks to weave intelligence into the fabric of the nation's industrial base, creating a feedback loop where industrial data improves models, and models improve industrial efficiency.
6.1 Intelligent Manufacturing: The Smart Factory
Huawei’s own production lines serve as the prototype for this vision. By integrating AI into quality control and supply chain management, Chinese manufacturers aim to move up the value chain from assembly to advanced bio-manufacturing and precision engineering.4 The "AI Plus" guidelines encourage the creation of "smart factories" where AI agents monitor production lines, predict equipment failures, and optimize energy usage in real-time. This is critical for China to maintain its status as the "World's Factory" amidst rising labor costs.4
For example, manufacturers are using computer vision models (distilled from larger foundation models) to inspect components at speeds human eyes cannot match. These systems are running on the edge, powered by the Ascend chips discussed earlier, creating a domestic closed-loop ecosystem.
6.2 Healthcare: Democratizing Diagnostics
With a rapidly aging population, China faces a looming healthcare crisis. AI is viewed as a necessary force multiplier to address the shortage of qualified doctors. Tencent Miying, an AI medical imaging platform, exemplifies this application. It uses deep learning to analyze CT scans, MRIs, and endoscopies for early signs of esophageal cancer, glaucoma, and lung nodules.34
In 2025, implementations of such systems expanded to grassroots clinics in provinces like Henan. This allows village doctors, who may lack specialized training, to upload scans to the cloud (hosted in Guizhou) and receive diagnostic suggestions from an AI trained on millions of cases from top-tier urban hospitals.36 The integration of "AI + Medical" is not just a commercial opportunity but a social stability imperative, ensuring that the shrinking working-age population can support the growing elderly demographic.
6.3 Economic Impact Forecasts
Economists are analyzing the potential magnitude of AI’s impact on China’s GDP with keen interest. Goldman Sachs estimates that AI adoption in China could exceed 30% by 2030, potentially boosting GDP significantly.37 They note a structural difference: China’s labor market—heavily skewed towards physical manufacturing and agriculture—is less immediately susceptible to generative AI automation (which primarily affects knowledge work) than the service-heavy US economy.
However, the "DeepSeek breakthrough" and the rapid deployment of coding assistants (like Qwen-Coder) may accelerate this timeline. McKinsey’s 2025 analysis highlights that while many Chinese firms are still in the pilot phase, "high performers" are already using AI to drive 20% efficiency gains, suggesting a widening gap between AI-native firms and traditional SOEs.38 The report suggests that if "AI Plus" succeeds, it could add percentage points to China's annual GDP growth, acting as a counterweight to demographic headwinds.
7. The Human Element: Talent Wars and Research
Hardware can be bought (or smuggled), and data can be generated, but talent must be cultivated. The "Brain Drain" has been a persistent anxiety for Beijing, but the tides are shifting.
7.1 The Retention Reality: Brain Drain vs. Returnees
Data from MacroPolo’s Global AI Talent Tracker (updated 2025) paints a nuanced picture of the talent landscape. Historically, the US has been the primary beneficiary of Chinese talent; in 2019, the vast majority of Chinese researchers who completed PhDs in the US stayed there to work for American tech giants.
However, the 2025 update shows a shifting tide. While the US still retains approximately 87% of top-tier Chinese AI researchers educated there, the "flow" of new talent is changing.39 Increasingly, top researchers are choosing to stay in China or return after brief stints abroad. This trend is driven by a mix of factors: "glass ceiling" concerns in the West due to geopolitical tensions, rising nationalism, and the sheer abundance of data and funding available in the domestic ecosystem.
The rise of labs like DeepSeek, 01.AI, and the Beijing Academy of Artificial Intelligence (BAAI)—which offer Silicon Valley-style compensation, computational resources, and academic freedom—provides a landing pad for elite talent that didn't exist five years ago. These labs are becoming magnets for "sea turtles" (returnees) who bring back tacit knowledge of Western research cultures.
7.2 Research Output and Quality
China has long led the world in the volume of AI patents (holding 61.1% of global AI patent origins in 2022).41 However, the historical criticism was "quantity over quality." The 2025 data suggests this quality gap is closing. The number of "notable AI models" originating from China has risen sharply. While the US still leads in the total number of foundation models (40 vs. 15 in 2024), Chinese research is dominating specific niches like person re-identification, video analysis, and increasingly, efficient reasoning architectures.42
8. Global Rankings and Current Status
Assessing "dominance" is subjective and depends on the metrics utilized. As of early 2026, the global rankings present a split verdict, reflecting the different strengths of the US and Chinese ecosystems.
8.1 The Stanford AI Index 2025
The Stanford Index identifies the United States as the clear leader in private investment ($109 billion vs. China’s ~$9 billion in 2024) and the generation of notable models.43 The investment gap is stark—nearly 12x. This suggests that while China’s state investment is massive (and often opaque or uncounted in private market data), the commercial capital dynamism in the US remains superior. However, the Index explicitly notes the "closing of the gap" in model performance, declaring that the era of significant US model superiority has ended.16
8.2 The Tortoise Global AI Index
In the Tortoise Global AI Index, the US ranks #1 and China #2. China scores exceptionally high in "Development" and "Government Strategy," reflecting the state’s heavy hand in guiding the sector.44 However, it lags in "Talent" (relative to the US density of top researchers) and "Operating Environment" (due to regulatory constraints and censorship).44 Interestingly, the index notes that smaller nations like Singapore are outperforming China in "intensity" (per capita adoption), but China’s sheer scale keeps it firmly in the second spot globally.45
8.3 The Verdict: Sovereignty over Dominance
China is currently the undisputed runner-up with a high velocity of convergence. It has not yet achieved "dominance" globally—the US still leads in capital, top-tier talent density, and hardware origination. However, China has achieved technological sovereignty: the ability to maintain a state-of-the-art AI ecosystem without reliance on Western software or models, even if it still relies on stockpiled or smuggled Western hardware while its domestic fabs catch up.
9. Conclusion: The Path to 2030
As China enters the second half of the decade, its plan to achieve AI dominance faces a "critical window." The strategy has successfully evolved from imitation to architectural innovation (DeepSeek/Qwen) and infrastructural brute force (East Data West Computing). The "AI Plus" initiative is beginning to rewire the massive industrial base, potentially unlocking productivity gains that could offset demographic decline.
However, three critical bottlenecks remain:
The Lithography Wall: Without access to EUV lithography, SMIC’s ability to mass-produce chips like the Ascend 910C is capped. Architectural efficiency (MoE, MLA) can compensate for a generation or two of hardware lag, but as physics limits are reached, the lack of advanced manufacturing may become a hard ceiling.
The Investment Gap: The disparity in private capital funding ($109B US vs $9B China) is unsustainable. China relies on state guidance funds, which are powerful but historically less efficient at allocating capital to disruptive innovation than private venture capital markets.
The Talent Pipeline: While retaining more talent than before, China still loses a significant portion of its absolute "best and brightest" to US institutions.
Despite these challenges, the evidence from 2025 indicates that China is on track to meet its 2030 goal of being the "primary center" for AI innovation in specific domains—particularly industrial AI, surveillance, and efficient edge deployment. The US may retain the edge in the "frontier" of Artificial General Intelligence (AGI) research and massive capital deployment, but China is building an AI ecosystem that is robust, self-sufficient, and deeply integrated into the physical economy. The race is no longer about who has the smartest chatbot, but who can most effectively translate machine intelligence into national power.
Works cited
China AI Strategy - Digital Trade and Data Governance Hub, accessed January 14, 2026, https://datagovhub.elliott.gwu.edu/china-ai-strategy/
Full Translation: China's 'New Generation Artificial Intelligence Development Plan' (2017), accessed January 14, 2026, https://digichina.stanford.edu/work/full-translation-chinas-new-generation-artificial-intelligence-development-plan-2017/
China aims for AI application breakthroughs in key sectors in next 2 years: official, accessed January 14, 2026, https://english.www.gov.cn/news/202508/29/content_WS68b18f2cc6d0868f4e8f528d.html
“Intelligent everything”: China's policy to supercharge AI adoption, accessed January 14, 2026, https://www.ussc.edu.au/intelligent-everything-china-s-policy-to-supercharge-ai-adoption
From innovation to deployment: How China is reshaping the future of AI governance, accessed January 14, 2026, https://policyreview.info/articles/news/china-reshaping-future-ai-governance/2041
The AI Action Plans: How Similar are the U.S. and Chinese Playbooks? - Just Security, accessed January 14, 2026, https://www.justsecurity.org/119509/us-chinese-ai-playbooks/
Next Generation Artificial Intelligence Development Plan, accessed January 14, 2026, https://fi.china-embassy.gov.cn/eng/kxjs/201710/P020210628714286134479.pdf
More Than Meets the AI: China's Data Centre Strategy, accessed January 14, 2026, https://icds.ee/en/more-than-meets-the-ai-chinas-data-centre-strategy/
GPU Performance (Data Sheets) Quick Reference (2023) - ArthurChiao's Blog, accessed January 14, 2026, https://arthurchiao.art/blog/gpu-data-sheets/
DeepSeek research suggests Huawei's Ascend 910C delivers 60% of Nvidia H100 inference performance | Tom's Hardware, accessed January 14, 2026, https://www.tomshardware.com/tech-industry/artificial-intelligence/deepseek-research-suggests-huaweis-ascend-910c-delivers-60-percent-nvidia-h100-inference-performance
Huawei says its AI chip better than Nvidia's A100 amid China's self-reliance drive | SemiWiki, accessed January 14, 2026, https://semiwiki.com/forum/threads/huawei-says-its-ai-chip-better-than-nvidias-a100-amid-chinas-self-reliance-drive.20381/
huawei's ascend 910c chip matches nvidia's h100. there will be 1.4 million of them by december. don't think banned countries and open source can't reach agi first. : r/OpenAI - Reddit, accessed January 14, 2026, https://www.reddit.com/r/OpenAI/comments/1ihebb4/huaweis_ascend_910c_chip_matches_nvidias_h100/
Huawei CANN vs CUDA: Can Open-Source AI Break NVIDIA's Grip?, accessed January 14, 2026, https://tech-now.io/en/blogs/huawei-cann-vs-cuda-can-open-source-ai-break-nvidias-grip
Can Huawei's open-sourced CANN toolkit break the CUDA monopoly? - AI News, accessed January 14, 2026, https://www.artificialintelligence-news.com/news/huawei-nvidia-cann-cuda-open-source-challenge/
State of AI: China - Artificial Analysis, accessed January 14, 2026, https://artificialanalysis.ai/downloads/china-report/2025/Artificial-Analysis-State-of-AI-China-Q2-2025-Highlights.pdf
Artificial Intelligence Index Report 2025 | Stanford HAI, accessed January 14, 2026, https://hai.stanford.edu/assets/files/hai_ai_index_report_2025.pdf
An Overview of Chinese Open-Source LLMs (Sept 2025) - IntuitionLabs, accessed January 14, 2026, https://intuitionlabs.ai/articles/chinese-open-source-llms-2025
[2412.19437] DeepSeek-V3 Technical Report - arXiv, accessed January 14, 2026, https://arxiv.org/abs/2412.19437
Top 6 Chinese AI Models Like DeepSeek (LLMs) You Should Know in 2026 - Index.dev, accessed January 14, 2026, https://www.index.dev/blog/chinese-ai-models-deepseek
DeepSeek-V3 Technical Report - arXiv, accessed January 14, 2026, https://arxiv.org/html/2412.19437v1
DeepSeek-V3 Explained 1: Multi-head Latent Attention | Towards Data Science, accessed January 14, 2026, https://towardsdatascience.com/deepseek-v3-explained-1-multi-head-latent-attention-ed6bee2a67c4/
arXiv:2412.15115v2 [cs.CL] 3 Jan 2025, accessed January 14, 2026, https://arxiv.org/pdf/2412.15115
arXiv:2505.09388v1 [cs.CL] 14 May 2025, accessed January 14, 2026, https://arxiv.org/pdf/2505.09388
DeepSeek R1 vs OpenAI O1: AI Model Comparison (2025) - Zignuts Technolab, accessed January 14, 2026, https://www.zignuts.com/blog/deepseek-r1-vs-openai-o1-comparison
The Chinese OBLITERATED OpenAI. A side-by-side comparison of DeepSeek R1 vs OpenAI O1 for Finance : r/ChatGPTPromptGenius - Reddit, accessed January 14, 2026, https://www.reddit.com/r/ChatGPTPromptGenius/comments/1i6joqt/the_chinese_obliterated_openai_a_sidebyside/
DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning, accessed January 14, 2026, https://arxiv.org/html/2501.12948v1
arXiv:2503.04872v2 [cs.CL] 17 Mar 2025, accessed January 14, 2026, https://arxiv.org/pdf/2503.04872
deepseek-ai/DeepSeek-R1 - Hugging Face, accessed January 14, 2026, https://huggingface.co/deepseek-ai/DeepSeek-R1
Oceans of data lift all boats: China's data centers move west | Merics, accessed January 14, 2026, https://merics.org/en/comment/oceans-data-lift-all-boats-chinas-data-centers-move-west
China Activates 1243-Mile AI Computing Network: 98% Efficiency at Continental Scale, accessed January 14, 2026, https://introl.com/blog/china-fntf-distributed-ai-computing-1243-miles-january-2026
China's "digital valley" reports computing power surge in 2025 - People's Daily Online, accessed January 14, 2026, http://en.people.cn/n3/2026/0114/c90000-20413825.html
China Telecom Cloud-computing Inner Mongolia Information Park Project - FIDIC, accessed January 14, 2026, https://fidic.org/sites/default/files/9-China%20Telecom%20Cloud-computing%20Inner%20Mongolia%20Information%20Park%20Project.pdf
Top 10 Energy-Consuming Data Centers - Sunbird DCIM, accessed January 14, 2026, https://www.sunbirddcim.com/infographic/top-10-energy-consuming-data-centers
Harnessing the Power of AI for Enhanced Diagnosis and Treatment of Hepatocellular Carcinoma - PMC - PubMed Central, accessed January 14, 2026, https://pmc.ncbi.nlm.nih.gov/articles/PMC12001482/
Building chain of medical AI innovations, Tencent establishes national new-generation medical imaging AI platform - Editorial Office, accessed January 14, 2026, https://jmai.amegroups.org/article/view/4357/html
Agenda. - Asia Pacific Medical Technology Association (APACMed), accessed January 14, 2026, https://apacmed.org/wp-content/uploads/2025/02/3-April-2025-China-DH-and-AI-webinar.pdf
What advanced AI means for China's economic outlook - Goldman Sachs, accessed January 14, 2026, https://www.goldmansachs.com/insights/articles/what-advanced-ai-means-for-chinas-economic-outlook
The state of AI in 2025: Agents, innovation, and transformation - McKinsey, accessed January 14, 2026, https://www.mckinsey.com/capabilities/quantumblack/our-insights/the-state-of-ai
Have Top Chinese AI Researchers Stayed in the United States?, accessed January 14, 2026, https://carnegieendowment.org/emissary/2025/12/china-ai-researchers-us-talent-pool?lang=en
US Retains 87% of Top Chinese AI Talent, But the Pipeline is Leaking - SunTzu Recruit, accessed January 14, 2026, https://www.suntzurecruit.com/2025/12/14/us-retains-87-of-top-chinese-ai-talent-but-the-pipeline-is-leaking/
Artificial Intelligence Index Report 2024 - AWS, accessed January 14, 2026, https://hai-production.s3.amazonaws.com/files/hai_ai-index-report-2024-smaller2.pdf
Research and Development | The 2025 AI Index Report | Stanford HAI, accessed January 14, 2026, https://hai.stanford.edu/ai-index/2025-ai-index-report/research-and-development
Economy | The 2025 AI Index Report | Stanford HAI, accessed January 14, 2026, https://hai.stanford.edu/ai-index/2025-ai-index-report/economy
EU and global AI race: an evaluation - EU Tech Loop, accessed January 14, 2026, https://eutechloop.com/eu-global-ai-race-evaluation/
FF2025-01-Global-AI-Index.pdf - CPBRD, accessed January 14, 2026, https://cpbrd.congress.gov.ph/wp-content/uploads/2025/02/FF2025-01-Global-AI-Index.pdf