Anthropic Report: The Battle for AI Dominance in 2028, and Why the US Risks Being Surpassed by China If It Loses Its Compute Advantage

Anthropic’s latest report indicates that the United States still holds an advantage in AI computing power, but China is rapidly closing the gap by exploiting regulatory loopholes and model distillation. If export controls are not tightened and distillation is not curbed, China could surpass the US in some areas by 2028, leading to a contest over global AI governance rules.
(Background: The White House plans to sign an executive order to ban Anthropic, potentially fully removing Claude this week)
(Additional context: Anthropic has sued the US Department of Defense! Demanding the reversal of the Claude ban: refusing to be tools for AI killing machines)

This article is compiled from analysis on anthropic.com.

In its latest policy research report, Anthropic warns that AI competition is shifting from model performance battles to systemic rivalry. The report states that the US and its allies still hold significant advantages in compute power, but China is rapidly closing the gap through loopholes in export controls and model distillation. If Washington does not block chip smuggling, overseas data center access, and distillation attacks now, by 2028, China could be competing with or even overtaking the US in AI—this is not only about technological leadership but will also determine who sets the future rules and governance frameworks for AI.

Anthropic has published a new paper outlining its views on the US-China AI competition.

The US and its allies need to maintain a relative lead over major competitors like China in AI. As AI performance rapidly advances, this technology will soon deeply influence social governance, national security, and international power dynamics. Meanwhile, the pace of AI development is accelerating, leaving limited time for setting rules, managing risks, and shaping global governance. Against this backdrop, Anthropic describes the measures necessary for the US to stay ahead.

One of the most critical elements in developing AI is access to computational chips used for training models, i.e., “compute power.” Since the most advanced chips are primarily developed by US and allied companies, the US government currently restricts China’s access through export controls. Recent experience shows these measures are already having noticeable effects. In fact, Chinese AI labs are able to develop models close to US levels mainly due to talent advantages, exploiting loopholes in export restrictions, and large-scale model distillation—extracting outputs and efficiencies from US models to rapidly replicate certain technical achievements.

This article depicts two possible scenarios for the world in 2028. Anthropic expects that by then, transformative AI systems will have already emerged.

Scenario 1: The US successfully maintains its compute advantage. Policymakers further tighten export controls, reducing China’s ability to acquire US frontier capabilities via distillation, and accelerate AI adoption within the US and allies. In this world, a US-led AI ecosystem can more effectively influence rules, standards, and governance. Under this scenario, the US is more likely to engage in meaningful communication with China on AI safeguards; Anthropic supports this approach within feasible bounds.

Scenario 2: The US fails to act sufficiently. Policymakers do not block China’s access to advanced compute, allowing Chinese AI firms to quickly close the gap and even surpass the US in some domains. In this world, AI rules and standards will be contested by more countries, and the most advanced models could be used for large-scale social governance, cyber operations, and defense. Even if this scenario is built on the US’s compute and technological spillovers, it would not serve the long-term interests of the US and its allies.

The US and its allies entered the AI race with a significant advantage. The core tools for AI leadership are built within a highly innovative ecosystem of US and allied companies. Past successes suggest that the most urgent task now is to avoid squandering this advantage, which would make it easier for China to catch up.

AI development and deployment will shape future global technology rules, industry standards, and governance frameworks. Those who lead in AI are more likely to influence how these systems are implemented.

Currently, the US and its allies hold a substantial lead in compute power, which is one of the most important factors in developing cutting-edge AI models. This lead stems from US and allied technological innovation and bipartisan support for export controls. However, in terms of model intelligence, Chinese labs are already not far behind. Anthropic’s concern about China’s AI progress is not to deny the achievements of Chinese people and AI communities, but because China is the only country besides the US with sufficient resources and top talent systematically approaching the frontier of AI.

China has already applied AI in information censorship, social governance, cyber defense, and military capabilities. Chinese labs possess world-class talent. The main constraint limiting their progress is compute power. Their ability to stay close to the frontier partly relies on exploiting loopholes in US export policies and large-scale distillation to extract efficiencies from US models, accelerating their own training and performance.

As the supply of compute expands rapidly and AI is increasingly used to improve the training of new models, the field is entering a period of rapid acceleration in AI performance. The so-called "Genius in the Data Center", the transformative level of AI intelligence, may be within reach. This acceleration makes policy action even more urgent.

So far, Chinese AI systems have continued to approach the frontier due to export control evasion and distillation. But if the US and allies act now to address both access to compute and model spillover issues, they could potentially lock in a 12-24 month lead in frontier performance. By 2028, such a lead would have significant strategic importance. It would also strengthen US-China AI dialogue on safeguards and governance, which Anthropic supports. However, the window to secure this advantage will not remain open indefinitely.

The compute advantage window is only 12-24 months

Here, Anthropic presents two possible scenarios for the US-China AI competition in 2028.
Scenario 1: The US and allies establish a significant lead in model intelligence, application adoption, and global distribution. If policymakers tighten export controls now, reducing China’s ability to distill US models, and accelerate AI adoption domestically and internationally, this scenario could materialize.

Scenario 2: China remains competitive near the frontier. If policymakers fail to press the current lead, or relax restrictions on Chinese access to advanced compute, this scenario will unfold.

Many in Congress and the Trump administration already support export controls, efforts to curb distillation attacks, and promoting US AI technology exports. With these policies advancing, Anthropic hopes the US and allies can secure a substantial lead before 2028, avoiding a close race with China two years later.

Anthropic expects frontier AI to have profound economic and social impacts in the coming years, as described in Machines of Loving Grace and The Adolescence of Technology. Its mission is to ensure humans can navigate and benefit from this transition. Success in this transition could lead to breakthroughs in medicine, invention, and economic growth.

Whether this transition proceeds smoothly depends partly on which systems are built first. The industry ecosystem, regulatory environment, and governance frameworks surrounding the most advanced AI will shape the rules for development and deployment. These rules, in turn, will influence whether AI is used for protection, whom it protects, and which interests it ultimately serves.

If the AI frontier is primarily shaped by systems used for military advantage, cyber operations, social control, and information dominance, the technological transformation will face higher uncertainty and security risks.

Historically, large-scale governance and surveillance have been limited by human labor costs. Powerful AI could reduce these costs, enabling automated governance, detection, and decision-making at larger scales. Therefore, China’s lead in AI could significantly influence global governance and security architectures.

China controls vast economic, military, and governance resources. It is the only other country besides the US with resource-rich, top-tier AI labs, and is approaching the frontier. Beijing has invested hundreds of billions of dollars into AI and semiconductor industries, aiming to become a leading AI superpower.

China has applied AI in information censorship, social governance, cyber defense, and security. Deployment of facial recognition, biometric data collection, and communication monitoring in some regions demonstrates AI’s potential for large-scale governance. Frontier AI systems will lower maintenance costs, expand coverage, and increase automation. As these technologies spread overseas, more countries may use AI to strengthen governance and surveillance, potentially transforming global norms and practices.

Frontier AI will shape future military power balances. China views AI as a key variable for future warfare and is advancing military intelligence systems. Its strategists see “smartening” military forces as a crucial path to enhance effectiveness. The military has begun deploying commercial AI systems, including models like DeepSeek for coordinating drone swarms and enhancing cyber operations.

These capabilities will not spread slowly. When a new model reaches new levels of autonomous targeting, vulnerability detection, or swarm coordination, the party controlling it can deploy it within weeks, not years.

Risks compound as frontier AI accelerates other core technologies. Advanced models can compress R&D cycles in semiconductors, biotech, and materials science. Leading in AI will continually expand a country’s advantage across national defense tech stacks.

If a Chinese lab develops a model comparable to Claude Mythos Preview before US labs, China will gain a system capable of autonomously discovering and exploiting software vulnerabilities, further boosting cyber capabilities. Future models will improve exponentially, posing greater risks to US and other nations’ security interests.

China rapidly copying US AI via distillation

The race between US and Chinese AI labs could make industry and government efforts to safeguard and govern more difficult. If China’s labs follow US models closely or reach parity, both US and Chinese private AI firms may feel increased pressure to release new models and products faster, before full safeguards are in place. Governments might also hesitate to implement responsible AI policies, fearing falling behind.

While more researchers in China are now concerned about AI safety and safeguards, this trend has not yet translated into comparable protective practices. As of last year, only 3 of China's top 13 AI labs had published safety assessments, and none disclosed risks related to chemical, biological, radiological, or nuclear (CBRN) threats. The Center for AI Safety and Innovation (CAISI) found that under common jailbreak techniques, DeepSeek's R1-0528 model responded to 94% of clearly malicious requests, compared to 8% for US reference models. The pattern persists in recent models; for example, an April evaluation of Moonshot's Kimi K2.5 found it less likely than US frontier models to refuse CBRN-related requests.

More seriously, Chinese labs often release models with open weights that have dual-use military and civilian capabilities. Once weights are open, the safeguards originally designed to prevent abuse can be stripped out, allowing any actor, state or non-state, to use these models maliciously, including for cyberattacks and CBRN misuse.

Anthropic supports US and allied policies to establish and maintain safeguards relative to China, in terms of model intelligence, domestic adoption, and global distribution. This lead is crucial for protecting US and allied national interests and preventing misuse of AI technology. It is also a prerequisite for maintaining a favorable position in future global AI governance.

Anthropic deeply respects the Chinese people and the achievements of China’s AI community. It hopes for peaceful relations between China and the world. Its concern is about the risks posed by any powerful nation gaining access to frontier AI systems, which could threaten global safeguards and governance.

Whenever feasible, Anthropic supports international dialogue with Chinese AI experts on AI safeguards. Regardless of where AI is developed and deployed, the world shares common interests in safeguarding AI. Addressing the risks posed by frontier AI requires communication between the US and China. Identifying shared challenges and promoting ideas to prepare for and mitigate these risks benefits both sides.

When the US maintains a substantial lead, constructive engagement is most promising. Responsible leadership in developing and deploying advanced AI will enhance US influence over China and other regions’ AI safeguard practices.

Mythos Preview, released by Anthropic in April as part of Project Glasswing, signals that a phase of performance acceleration has arrived, making policy action even more urgent. After gaining access to the model, Firefox patched more security vulnerabilities last month than it did in all of 2025, almost 20 times the 2025 monthly average. A Chinese cybersecurity analyst described the situation as: "They're sharpening their knives, while the other side suddenly raises a fully automatic Gatling gun."

This acceleration toward “Genius in the Data Center”—the transformative AI intelligence level Anthropic envisions—brings the risk of rapid near-frontier breakthroughs. This makes policy responses more urgent.

Such progress will bring AI performance close to the "Genius in the Data Center" scenario. The acceleration is driven by scaling laws: as compute and data inputs grow, model performance improves predictably. At the same time, AI is increasingly used to accelerate the development of new models.

Anthropic likely sees 2026 as the window for the US to achieve a breakthrough lead in AI. US labs possess the most advanced models and a significant lead in the quantity and quality of AI chips needed for frontier research, backed by substantial capital from revenue and investments. Chinese labs have real advantages—world-class talent, abundant and cheap energy, and large datasets—but lack sufficient domestic compute capacity and funding to compete at the same level.

The US and China are engaged in a strategic contest over frontier AI. Public statements from Beijing and Washington reflect this view. Labeling it a “race” might be misleading: it’s more an ongoing competition for advantage. The ultimate outcome—whether democratic or non-democratic countries shape the AI era’s values, rules, and norms—depends on the long-term trajectory of this contest.

This competition unfolds along four fronts:

  • Performance: Which countries develop the most powerful AI models.
  • Domestic adoption: Which countries most effectively integrate AI into business and government.
  • Global distribution: Which nations build the AI infrastructure supporting the global economy.
  • Resilience: Which countries maintain political stability during economic transitions.

Two scenarios for 2028: US-led or parallel competition

Among these, performance is the most critical. Anthropic predicts that frontier model performance will have the deepest geopolitical impact. It is also the core driver of market adoption and global distribution.

But performance alone is not enough. If China can more quickly and effectively integrate near-frontier AI systems into its economy and safeguards, and promote low-cost, subsidized AI globally, it could offset any performance gap. Beijing’s “AI+” initiatives and focus on “embodied intelligence” reflect a policy priority to embed frontier intelligence into economic and national systems. The Trump administration’s AI strategy, emphasizing “exporting US AI technology stacks,” similarly aims for strategic advantages through global adoption.

While this article does not focus on “resilience,” Anthropic considers it an important aspect of AI competition. Maintaining stability, cohesion, and good policy-making during this period will be a key advantage; conversely, failure to do so creates vulnerabilities.

Compute, meaning the advanced semiconductors needed for training and deploying frontier AI, is a core input across all four fronts. The global race for AI leadership is largely a race for compute: over the past decade, most of AI's performance gains have come from applying ever larger amounts of compute to training.
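
The predictable compute-performance relationship the report leans on can be illustrated with a parametric scaling law. The sketch below uses the Chinchilla-style form L(N, D) = E + A/N^α + B/D^β with the fitted constants published by Hoffmann et al. (2022); the constants and model scales here are illustrative assumptions for a back-of-envelope demonstration, not figures from Anthropic's report:

```python
def chinchilla_loss(n_params: float, n_tokens: float) -> float:
    """Predicted pretraining loss L(N, D) = E + A/N^alpha + B/D^beta.

    Constants are the fitted values reported by Hoffmann et al. (2022);
    they are used here purely for illustration.
    """
    E, A, B, alpha, beta = 1.69, 406.4, 410.7, 0.34, 0.28
    return E + A / n_params**alpha + B / n_tokens**beta

# Scaling parameters and tokens together (a rough proxy for scaling
# compute under a compute-optimal split) lowers predicted loss steadily.
scales = [(1e9, 2e10), (1e10, 2e11), (1e11, 2e12)]  # (params, tokens)
losses = [chinchilla_loss(n, d) for n, d in scales]
assert losses[0] > losses[1] > losses[2]  # predictably monotone
```

The point of the sketch is the shape of the curve, not the exact numbers: with more compute the loss falls predictably toward the irreducible floor E, which is why compute is treated as the decisive input.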

Beyond training, compute is also vital for inference—supporting user interaction with AI. Whether training the smartest models or deploying them commercially and militarily, compute is essential. Top talent, vast data, and breakthroughs in algorithms are also crucial, but without sufficient compute, these investments cannot fully realize their potential.

Today, democratic countries are leading in compute. Concerns exist that export controls might accelerate China’s efforts to develop domestic advanced chips, but there is little evidence that China’s self-sufficiency efforts challenge US and allied dominance in advanced compute. Before export controls, Beijing had already invested heavily in China’s chip industry and launched major policies like “Made in China 2025” and the National Integrated Circuit Industry Investment Fund. Despite these efforts, Chinese labs and chipmakers remain constrained by US and allied export restrictions on advanced chips and manufacturing equipment.

As a result, the compute gap appears to be widening. An analysis of Huawei and NVIDIA’s product roadmaps shows Huawei’s total processing capacity in 2026 will be only about 4% of NVIDIA’s, dropping to 2% in 2027. More importantly, NVIDIA is just part of the US-led ecosystem; Google and Amazon are also rapidly developing their own chips (TPU, Trainium) to meet the needs of US frontier labs and customers.

China's compute capacity is further constrained by limited progress in the most complex segments of the semiconductor supply chain. Without access to EUV lithography, and with policies tightening on DUV lithography as well, Chinese manufacturers will struggle to produce chips at the scale and quality needed to challenge US dominance. China also cannot mass-produce high-bandwidth memory, further widening the gap. One study estimates that if US restrictions on China's access to US compute tighten, the US AI industry could command roughly 11 times more compute than China's.

The lead in compute mainly stems from two factors:

  1. Continuous innovation by US, Japanese, Korean, Taiwanese, and Dutch companies like NVIDIA, AMD, Micron, TSMC, Samsung, and ASML. These firms have built the unique technologies needed for the world’s most advanced semiconductors. Without decades of R&D and engineering breakthroughs, today’s AI achievements would not be possible.

  2. Forward-looking policies by the US government over the past three administrations. Bipartisan efforts to restrict sales of advanced AI chips and manufacturing equipment to China have protected US and allied innovation engines. Anthropic’s CEO has publicly emphasized the importance of export controls. These restrictions have limited the sale of top-tier chips and manufacturing tools to China, constraining its AI development despite heavy national investment. Without such measures, China might have the full conditions to develop AI models comparable or superior to US systems.

Some observers worry that restricting access to compute might push China’s labs to innovate elsewhere, weakening US dominance. While Chinese labs are indeed innovating, so far these efforts have not fully closed the compute gap. Algorithm improvements are both a function of compute and a multiplier of it, not a substitute. Discovering new algorithms heavily depends on compute: more compute allows more experiments, leading to more breakthroughs. As frontier models participate more in AI R&D, this feedback loop will intensify, with models helping to build their own successors. In short, compute advantage will further translate into algorithmic advantage, ultimately securing a lasting lead in AI.

Currently, US frontier systems are estimated to be at least several months ahead of China’s top models in intelligence, though such estimates are inherently uncertain. While open-weight models from China have garnered attention, they lag behind closed-source frontier models in enterprise adoption, and market investors are increasingly concerned about their commercialization prospects. Chinese labs seem to be shifting away from open-source toward keeping their best models private.

Exploiting loopholes: chip smuggling and overseas data centers

Chinese AI leaders acknowledge the impact of export controls and their core need for US chips. Top Chinese AI executives express concern that compute constraints will slow progress, citing chip scarcity as a primary bottleneck and export restrictions as a key cause. A senior executive at a major Chinese cloud provider said that access to export-controlled US chips would have a "huge, really huge" impact, that any supply gap would severely hinder China's AI development, and dismissed fears that importing US chips would slow China's self-sufficiency efforts. The main voices claiming that "export controls are ineffective" are mostly official statements and state media, possibly aimed at influencing US policymakers.

While export controls are effective in shaping current advantages, their scope remains insufficient. China cannot produce enough advanced chips domestically nor legally purchase them abroad, but Chinese labs still use two main workarounds to stay near the frontier:

  1. Smuggling compute hardware into China or accessing overseas data centers.
  2. Illicit model access—distillation attacks on US frontier models, using the outputs as tools to accelerate their own AI R&D.

China's evasion of US export restrictions is well documented. For example, US prosecutors accused a co-founder of Supermicro and two others of transferring $2.5 billion worth of servers containing advanced US chips to China. Reports indicate DeepSeek trained its latest models using US chips banned from export to China, and the Financial Times reports that Alibaba and ByteDance are training flagship models in Southeast Asian data centers with US-restricted chips. Because current restrictions mainly target chip sales rather than remote access, the export-control system struggles to block Chinese access to restricted chips hosted overseas.

Distillation attacks are another method used to approach US models and weaken export controls. Chinese labs set up numerous fake accounts to bypass access controls on US models, systematically collecting outputs to replicate frontier performance. This allows them to piggyback on decades of US research, billions of dollars invested, and top engineering talent worldwide. The result: China can obtain near-frontier performance at minimal cost, effectively subsidized by US investments. From a long-term national security perspective, this amounts to systematic industrial espionage on core technologies. OpenAI, Google, Anthropic, and the Frontier Model Forum have publicly condemned distillation attacks.
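
Mechanically, the distillation the report describes means training a "student" model to imitate a "teacher" model's output distributions. Below is a minimal sketch of the textbook objective (Hinton et al., 2015); the logits and temperature are made-up illustrative values, and this is a conceptual demonstration of the technique, not any lab's actual pipeline:

```python
import numpy as np

def softmax(z, T=1.0):
    """Convert logits to a probability distribution, softened by temperature T."""
    z = np.asarray(z, dtype=float) / T
    z -= z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

def distillation_loss(teacher_logits, student_logits, T=2.0):
    """KL(teacher || student) on temperature-softened distributions.

    Minimizing this over many queried outputs pushes the student toward
    the teacher's behavior without access to the teacher's weights.
    """
    p = softmax(teacher_logits, T)  # teacher's soft labels
    q = softmax(student_logits, T)
    return float(np.sum(p * np.log(p / q)))

teacher = [4.0, 1.0, -2.0]   # scores collected from the queried model
aligned = [3.8, 1.1, -1.9]   # student that already mimics the teacher
random_ = [0.0, 0.0, 0.0]    # untrained student (uniform predictions)
assert distillation_loss(teacher, aligned) < distillation_loss(teacher, random_)
```

The key property is that only the teacher's outputs are needed, which is why API access alone, harvested at scale through fake accounts, is enough to transfer much of a frontier model's capability.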

Chinese AI experts openly acknowledge the scale and importance of distillation. A recent state media article described distillation attacks on US models as a “backdoor” China’s labs rely on, calling it a core part of their business model. A former ByteDance researcher said Chinese labs use distillation as a shortcut to train models, avoiding the need to build their own data pipelines.

US policymakers have responded swiftly. The White House Office of Science and Technology Policy issued a memo on distillation attacks. Senior officials in the White House, Pentagon, and Congress have expressed concern. Recent legislation proposed by the House Foreign Affairs Committee aims to address distillation attacks and has gained bipartisan support.

If US and allied policymakers can close these two channels—smuggling chips and overseas data center access, and illicit model attacks—the US and its allies could gain a rare opportunity to lock in a significant lead.

Below, Anthropic describes two hypothetical future scenarios illustrating how current policies could shape the 2028 landscape.

Scenario 1: US compute advantage remains strong.
If policymakers tighten export controls now, reducing China’s ability to distill US models, and accelerate AI adoption domestically and abroad, the US could maintain a 12-24 month lead in frontier performance.

Scenario 2: China remains competitive near the frontier.
If policymakers do not push forward or relax restrictions on Chinese access to advanced compute, China could catch up or surpass the US.

Many in Congress and the Trump administration already support export controls, efforts to curb distillation, and promoting US AI exports. With these policies advancing, Anthropic hopes the US can secure a substantial lead before 2028, avoiding a close race with China two years later.

Anthropic expects frontier AI to profoundly impact the economy and society in the coming years, as described in Machines of Loving Grace and The Adolescence of Technology. Its mission is to ensure humans can safely and beneficially navigate this transition. Success could lead to breakthroughs in medicine, invention, and economic growth.

Whether this transition proceeds smoothly depends partly on which systems are built first. The industry ecosystem, regulatory environment, and governance frameworks surrounding advanced AI will shape the rules for development and deployment. These rules will influence whether AI is used for protection, whom it protects, and which interests it ultimately serves.

If the AI frontier is primarily driven by systems used for military advantage, cyber operations, social control, and information dominance, the risks and uncertainties of this technological shift increase.

Historically, large-scale governance and surveillance have been limited by human labor costs. Powerful AI could reduce these costs, enabling automated governance, detection, and decision-making at larger scales. China’s lead in AI could thus significantly influence global governance and security architectures.

China controls vast economic, military, and governance resources. It is the only other country besides the US with resource-rich, top-tier AI labs, and is approaching the frontier. Beijing has invested hundreds of billions of dollars into AI and semiconductor industries, aiming to become a leading AI superpower.

China has applied AI in information censorship, social governance, cyber defense, and security. Deployment of facial recognition, biometric data collection, and communication monitoring demonstrates AI’s potential for large-scale governance. Frontier AI systems will lower maintenance costs, expand coverage, and increase automation. As these technologies spread overseas, more countries may use AI to strengthen governance and surveillance, potentially transforming global norms and practices.

Frontier AI will influence future military power balances. China sees AI as a key factor for future warfare and is advancing military intelligence systems. Its strategists view “smartening” forces as essential to improve effectiveness. The military has begun deploying commercial AI systems, including models like DeepSeek for coordinating drone swarms and cyber operations.

These capabilities will not spread slowly. When a new model reaches new levels of autonomous targeting, vulnerability detection, or swarm coordination, the controlling party can deploy it within weeks, not years.

Risks compound as frontier AI accelerates other core technologies. Advanced models can compress R&D cycles in semiconductors, biotech, and materials science. Leading in AI will continually expand a country’s advantage across defense tech stacks.

If a Chinese lab develops a model comparable to Claude Mythos Preview before US labs, China will gain a system capable of autonomously discovering and exploiting software vulnerabilities, further boosting cyber capabilities. Future models will improve exponentially, posing greater risks to US and allied security interests.

China rapidly copying US AI via distillation

The US-China AI race could complicate efforts by industry and government to safeguard and govern AI. If Chinese labs follow US models or reach parity, both US and Chinese firms may rush to release new models and products before safeguards are fully in place. Governments might hesitate to implement responsible AI policies, fearing they will fall behind.

While more Chinese researchers now focus on AI safety, this has not yet translated into comparable safeguards. As of last year, only 3 of China’s top 13 AI labs published safety assessments; none disclosed risks related to chemical, biological, radiological, or nuclear threats (CBRN). CAISI found that under common jailbreak techniques, DeepSeek’s R1-0528 model responds to 94% of malicious requests, versus 8% for US models. Recent models show similar patterns; for example, a April evaluation of Moonshot’s Kimi K2.5 found it less likely to reject CBRN requests than US models.

More concerning, Chinese labs often release models with open weights that have dual-use military and civilian capabilities. Once weights are open, safeguards can be bypassed, enabling malicious actors to use these models for cyberattacks or CBRN misuse—despite safeguards designed to prevent such abuses.

Anthropic supports US and allied policies to establish safeguards and maintain a lead relative to China, in model intelligence, domestic adoption, and global distribution. This lead is vital for protecting national interests and preventing misuse. It is also essential for securing a favorable position in future global AI governance.

Anthropic respects the Chinese people and their AI achievements. It hopes for peaceful relations. Its concern is about the risks posed by any powerful nation gaining access to frontier AI, which could threaten safeguards and governance.

Whenever feasible, Anthropic supports international dialogue with Chinese AI experts on safeguards. Regardless of where AI is developed, the world shares common interests in safeguarding AI. Addressing risks requires US-China communication. Recognizing shared challenges and promoting ideas to prepare for and mitigate these risks benefits both.

When the US maintains a large lead, constructive engagement is most promising. Responsible leadership in developing and deploying advanced AI will enhance US influence over China and other regions’ safeguard practices.

Mythos Preview, released by Anthropic in April as part of Project Glasswing, signals that a phase of performance acceleration has arrived, making policy action even more urgent. After access was granted, Firefox patched more safeguard vulnerabilities last month than in all of 2025—almost 20 times the monthly average of 2025. A Chinese cybersecurity analyst described it as: “They’re sharpening their knives, while the other side suddenly raises a fully automatic Gatling gun.”

This acceleration toward “Genius in the Data Center”—the transformative AI intelligence level Anthropic envisions—brings the risk of rapid breakthroughs. This heightens the urgency of policy responses.

Such progress will bring AI performance close to the "Genius in the Data Center" scenario. Driven by scaling laws, performance improves predictably as compute and data increase, and AI itself is increasingly used to accelerate model development.
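The "predictable improvement" the paragraph describes is what the research literature calls a neural scaling law. As one illustration (not something this article cites), the Chinchilla analysis by Hoffmann et al. models pre-training loss as a power law in parameter count N and training tokens D:

```latex
% Illustrative empirical scaling law (Hoffmann et al., 2022):
% loss falls smoothly and predictably as model size N and data D grow.
L(N, D) = E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

Here E is the irreducible loss and A, B, α, β are fitted constants (the fitted exponents were roughly α ≈ 0.34 and β ≈ 0.28). This is why adding compute, which buys larger N and D, has reliably bought better performance.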

Anthropic likely sees 2026 as the critical window for the US to achieve a breakthrough lead. US labs possess the most advanced models and a significant advantage in the quantity and quality of AI chips needed for frontier research, supported by substantial capital. Chinese labs have real strengths—world-class talent, abundant and cheap energy, and large datasets—but lack sufficient domestic compute capacity and funding to compete at the same level.

The US and China are engaged in a strategic contest over frontier AI, and public statements from both sides reflect this. Calling it a "race" may be misleading: it is an ongoing competition for advantage rather than a contest with a finish line. The ultimate outcome—whether democratic or authoritarian countries shape AI's values, rules, and norms—depends on this long-term trajectory.

This contest unfolds along four fronts:

  • Performance: Which countries develop the most powerful AI models.
  • Domestic adoption: Which countries most effectively integrate AI into industry and government.
  • Global distribution: Which countries supply the AI models and infrastructure the global economy is built on.
  • Resilience: Which countries maintain stability during economic transitions.

Two scenarios for 2028: US-led or parallel competition

Performance is the most critical front. Anthropic predicts that frontier model performance will have the deepest geopolitical impact. It also drives market adoption and global distribution.

But performance alone is insufficient. If China can more rapidly and effectively integrate near-frontier AI into its economy and safeguards, and promote low-cost, subsidized AI worldwide, it could offset any performance gap. Beijing’s “AI+” initiatives and focus on “embodied intelligence” reflect a policy priority to embed frontier AI into economic and national systems. The Trump administration’s AI export strategy similarly emphasizes promoting US AI technology globally for strategic advantage.

While this article does not focus on “resilience,” Anthropic considers it a key aspect of AI competition. Maintaining stability, cohesion, and good policy-making during this period will be a major advantage; failure to do so creates vulnerabilities.

Compute—the advanced semiconductors used to train and deploy frontier AI—is a core input across all four fronts. The global AI leadership race is largely a compute race: over the past decade, most of AI's performance gains have come from scaling up compute.

Beyond training, compute is vital for inference—supporting user interaction. Whether training the most advanced models or deploying them commercially and militarily, compute is essential. Top talent, vast data, and algorithm breakthroughs are also crucial, but without sufficient compute, these efforts cannot reach their full potential.

Today, democratic countries lead in compute. Concerns exist that export controls might accelerate China’s efforts to develop domestic advanced chips, but there is little evidence that China’s self-sufficiency efforts threaten US and allied dominance. Before export restrictions, Beijing had already invested heavily in China’s chip industry and launched major policies like “Made in China 2025” and the National IC Industry Investment Fund. Despite these, Chinese labs and chipmakers remain constrained by US and allied export restrictions on advanced chips and manufacturing equipment.

As a result, the compute gap appears to be widening. An analysis of Huawei and NVIDIA’s product roadmaps shows Huawei’s total processing capacity in 2026 will be only about 4% of NVIDIA’s, dropping to 2% in 2027. More importantly, NVIDIA is just part of the US ecosystem; Google and Amazon are also rapidly developing their own chips (TPU, Trainium) to meet US frontier lab needs.

China's compute capacity is further constrained by limited progress in the most complex segments of the semiconductor supply chain. Without access to EUV lithography, and with policies tightening on DUV, Chinese manufacturers will struggle to produce chips at the necessary scale and quality. China also cannot yet mass-produce high-bandwidth memory, widening the gap further. One study estimates that if US restrictions tighten, the US AI industry could command roughly 11 times more compute than China's.

The compute lead mainly results from two factors:

  1. Continuous innovation by US, Japanese, Korean, Taiwanese, and Dutch firms like NVIDIA, AMD, Micron, TSMC, Samsung, and ASML—building the unique technologies for the world’s most advanced semiconductors. Without decades of R&D and engineering breakthroughs, today’s AI would not be possible.

  2. Forward-looking policies by the US government over the past three administrations. Bipartisan efforts to restrict sales of advanced chips and manufacturing equipment to China have protected US and allied innovation. Anthropic’s CEO has emphasized the importance of export controls. These restrictions have limited sales of top-tier chips and equipment, constraining China’s AI progress despite heavy national investment. Without such measures, China could have developed AI models comparable or superior to US systems.

Some worry that restricting compute access might push China’s labs to innovate elsewhere, weakening US dominance. While Chinese labs are innovating, so far these efforts have not fully closed the compute gap. Algorithm improvements depend heavily on compute: more compute enables more experiments, leading to more breakthroughs. As frontier models participate more in R&D, this feedback loop will intensify, with models helping to develop their successors. In short, compute advantage will further translate into algorithmic advantage, securing a lasting AI lead.

Today, US frontier systems are estimated to be at least several months ahead of China’s top models in intelligence, though estimates are uncertain. While open-weight Chinese models have gained attention, they lag behind closed-source US models in enterprise use, and investors are concerned about commercialization. Chinese labs seem to be shifting away from open-source toward keeping their best models private.

Exploiting loopholes: chip smuggling and overseas data centers

Chinese AI leaders acknowledge the impact of export controls and their core need for US chips. Top Chinese AI executives worry that compute constraints will slow progress, citing chip scarcity as the primary bottleneck and export restrictions as a key cause. A senior executive at a major Chinese cloud provider said that access to export-controlled US chips has a "huge, really huge" impact, that any supply gap would severely hinder China's AI development, and dismissed fears that importing US chips would slow China's self-sufficiency efforts. The main voices claiming "export controls are ineffective" are mostly official statements and state media, possibly aimed at influencing US policy.

While export controls shape current advantages, their scope remains limited. China cannot produce enough advanced chips domestically nor legally purchase them abroad, but Chinese labs use two main workarounds to stay near the frontier:

  1. Smuggling chips into China or accessing overseas data centers.
  2. Illicit model access—distillation attacks on US models, collecting outputs to replicate performance.

China's evasion of US export restrictions is well known. For example, US prosecutors accused a Supermicro co-founder and two others of transferring servers worth $2.5 billion containing advanced US chips into China. Reports indicate DeepSeek trained its latest models using US chips banned from export to China, and the Financial Times reports that Alibaba and ByteDance are training flagship models in Southeast Asian data centers using US-restricted chips. Current restrictions mainly target chip sales, not remote access, so US controls struggle to block overseas compute use; the US policy system is only beginning to address this channel of China's acquisition of advanced compute.

Distillation attacks are another method to approach US models and weaken export controls. Chinese labs set up fake accounts to bypass access controls, systematically collecting outputs to replicate frontier performance. This allows them to leverage decades of US research, billions invested, and top engineering talent to obtain near-frontier performance at minimal cost—effectively subsidized by US investments. From a long-term security perspective, this amounts to systematic industrial espionage. OpenAI, Google, Anthropic, and the Frontier Model Forum have publicly condemned distillation attacks.
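The mechanics behind a distillation attack are simple: query the stronger model at scale, record its outputs, and train a cheaper model to imitate them. The toy sketch below illustrates the pattern; the logistic "teacher" and "student" and every name in it are illustrative inventions for demonstration, not any lab's actual pipeline.

```python
import math
import random

# Toy sketch of distillation (illustrative assumptions throughout).
# The "teacher" is available only as a query interface; the "student"
# is trained purely on the teacher's recorded outputs, never its weights.

def teacher(x):
    """Black-box teacher: returns P(label = 1 | x), hidden weights (2, -1)."""
    return 1.0 / (1.0 + math.exp(-(2.0 * x - 1.0)))

def student(x):
    """Student with the same functional form but learnable weights."""
    return 1.0 / (1.0 + math.exp(-(w * x + b)))

# Step 1: query the teacher at scale and record its soft outputs.
random.seed(0)
queries = [random.uniform(-3.0, 3.0) for _ in range(200)]
soft_labels = [teacher(x) for x in queries]

# Step 2: fit the student to the recorded outputs by full-batch gradient
# descent on cross-entropy against the soft labels (gradient is q - p).
w, b = 0.0, 0.0
lr = 0.5
for _ in range(10_000):
    gw = gb = 0.0
    for x, p in zip(queries, soft_labels):
        q = student(x)
        gw += (q - p) * x
        gb += (q - p)
    w -= lr * gw / len(queries)
    b -= lr * gb / len(queries)

# The student now closely reproduces the teacher's behavior without ever
# seeing its parameters -- the essence of a distillation attack.
```

The point of the sketch is the asymmetry the article describes: the teacher's builders paid for the hard part (here, choosing the hidden weights), while the student recovers near-equivalent behavior from outputs alone at a fraction of the cost.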

Chinese AI experts openly acknowledge the scale and importance of distillation. A recent state media article described distillation attacks on US models as a "backdoor" that China's labs rely on, treating it as a core business practice. A former ByteDance researcher said Chinese labs use distillation as a shortcut, avoiding building their own data pipelines.

US policymakers have responded quickly. The White House OSTP issued a memo on distillation attacks. Senior officials in the White House, Pentagon, and Congress have expressed concern. Recent legislation from the House Foreign Affairs Committee aims to address distillation and has bipartisan support.

If the US and its allies can close these two channels—chip smuggling and overseas data center access on the one hand, and illicit model distillation on the other—the US could gain a rare opportunity to secure a significant lead.

Below, Anthropic describes two hypothetical future scenarios illustrating how current policies could shape the 2028 landscape.

Scenario 1: US compute advantage remains strong.
If policymakers tighten export controls now, reducing China’s ability to distill US models, and accelerate AI adoption domestically and internationally, the US could maintain a 12-24 month lead.

Scenario 2: China remains competitive near the frontier.
If policymakers fail to advance these measures, or relax restrictions on Chinese access to advanced compute, China could catch up to or even surpass the US.

Many in Congress and the Trump administration already support export controls, efforts to curb distillation, and promoting US AI exports. With these policies advancing, Anthropic hopes the US can secure a substantial lead before 2028, avoiding a close race two years later.

Anthropic expects frontier AI to have profound economic and social impacts in the coming years, as described in Machines of Loving Grace and The Adolescence of Technology. Its mission is to ensure humans can safely and beneficially navigate this transition. Success could lead to breakthroughs in medicine, invention, and economic growth.

Whether this transition proceeds smoothly depends partly on which systems are built first. The industry ecosystem, regulatory environment, and governance frameworks surrounding advanced AI will shape the rules for development and deployment. These rules will influence whether AI is used for protection, whom it protects, and which interests it ultimately serves.

If the AI frontier is primarily shaped by systems used for military advantage, cyber operations, social control, and information dominance, the risks and uncertainties of this technological shift increase.

Historically, large-scale governance and surveillance have been limited by human labor costs. Powerful AI could reduce these costs, enabling automated governance, detection, and decision-making at larger scales. China’s lead in AI could thus significantly influence global governance and security architectures.

China controls vast economic, military, and governance resources. It is the only other country besides the US with resource-rich, top-tier AI labs, and is approaching the frontier. Beijing has invested hundreds of billions of dollars into AI and semiconductor industries, aiming to become a leading AI superpower.

China has applied AI in information censorship, social governance, cyber defense, and security. Deployment of facial recognition, biometric data collection, and communication monitoring demonstrates AI’s potential for large-scale governance. Frontier AI systems will lower maintenance costs, expand coverage, and increase automation. As these technologies spread overseas, more countries may use AI to strengthen governance and surveillance, potentially transforming global norms and practices.

Frontier AI will influence future military power balances. China sees AI as decisive for future warfare and is advancing military intelligence systems; its strategists view the "intelligentization" of its forces as essential to improving combat effectiveness. The military has begun deploying commercial AI systems, including models like DeepSeek, for coordinating drone swarms and cyber operations.

These capabilities will not spread slowly. When a new model reaches new levels of autonomous targeting, vulnerability detection, or swarm coordination, the controlling party can deploy it within weeks.
