When AI learns to act on its own: The new geopolitical battleground of the Agent era

Chips were the main battleground of the last war. The next war revolves around something even harder to control, track, and counter: AI Agents that can autonomously plan, execute, and iterate. In 2026, they are rewriting business logic and national security boundaries at the same time.

A company terminated by the Pentagon

In the first quarter of 2026, an event that barely drew attention domestically quietly occurred: The U.S. Department of Defense terminated its partnership contract with Anthropic, and signed a new agreement with OpenAI, allowing their models to be used in classified systems under the “all legal uses” framework.

The termination was not due to technical shortcomings, but to Anthropic's insistence on drawing ethical red lines for military use, above all its refusal to authorize surveillance and autonomous weapons systems. The Pentagon subsequently listed the company as a “national security supply chain risk.”

The significance of this event goes far beyond the loss or gain of a contract. It reveals an accelerating reality: AI ethical stances have become bargaining chips in geopolitical competition. Deciding whose model to use, which permissions to grant, and where to draw red lines: these were once internal policy decisions of tech companies, and they are now being overridden by national security logic.

I. Concept Overview · What is an AI Agent

Unlike traditional AI “input-output” single-turn models, AI Agents can autonomously draw up plans, call tools, perform multi-step tasks, and iterate on the results, without step-by-step human intervention. Autumn 2025 is widely regarded as the start of the Agentic AI era, with products such as Claude Code and GPT-o3 bringing the capability into the mainstream.
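The plan-act-observe loop described above can be sketched in a few lines. Everything here is illustrative: `run_agent`, `TOOLS`, and the hard-coded "planner" are hypothetical stand-ins, not any vendor's actual API; a real agent would ask a language model to choose each next action.

```python
# Minimal sketch of an agent loop: plan a step, call a tool, observe the
# result, and iterate until the agent decides it is done. No human
# intervenes between steps. All names are illustrative assumptions.

def calculator(expr: str) -> str:
    """A toy tool the agent can call."""
    return str(eval(expr, {"__builtins__": {}}))

TOOLS = {"calculator": calculator}

def plan_next_step(goal: str, history: list) -> dict:
    """Stand-in for the model's planning call. A real agent would query
    an LLM here; this sketch hard-codes one tool call, then finishes."""
    if not history:
        return {"action": "calculator", "input": goal}
    return {"action": "finish", "input": history[-1]}

def run_agent(goal: str, max_steps: int = 5) -> str:
    history = []
    for _ in range(max_steps):
        step = plan_next_step(goal, history)
        if step["action"] == "finish":
            return step["input"]              # the agent decides it is done
        result = TOOLS[step["action"]](step["input"])
        history.append(result)                # observe, then iterate
    return history[-1] if history else ""

print(run_agent("2 + 3 * 4"))  # → 14
```

The point of the loop structure is that capability lives in the iteration, not in any single model call, which is why Agent capability is so much harder to measure or block than raw model size.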

Agent: An underestimated strategic asset

Most discussions about AI geopolitics still focus on “whose large model is stronger.” But the number of parameters in large models is a relatively transparent, traceable metric — whereas the capabilities of Agents are difficult to quantify, hard to block, and challenging to reproduce.

A recent analysis in The National Interest described this shift as a “critical point”: countries that master AI Agents and integrate them into national strategy will reshape the global business, security, and governance landscape over the coming decades. This is not a prediction but a process already underway.

Let’s break down the specific power of Agents in national security:

Offensive side

  • Autonomous discovery and exploitation of network vulnerabilities, faster than human defenders can respond

  • Large-scale generation of disinformation and deepfake videos to manipulate public opinion at machine speed

  • Swarm drone operations to reduce personnel risk

  • Real-time integration of multi-source intelligence to compress strategic decision windows

Defensive side

  • AI-driven threat detection, analyzing millions of daily events and filtering noise

  • Automatic generation of firewall rules to counter AI attacker iterations

  • Real-time supply chain risk monitoring, identifying abnormal access behaviors

  • Automated deployment of zero-trust architectures for critical infrastructure

In 2025, Anthropic publicly confirmed that Chinese hackers had used AI Agents to automate cyberattacks “to an unprecedented degree.” Meanwhile, the U.S. White House is deploying Agents in scientific research and defense through the “Genesis Mission” project to accelerate breakthroughs. A CFR (Council on Foreign Relations) report documents the PLA's transition from an “informatized” to an “intelligentized” military, with Agent technology at the core of that transformation.

Two simultaneous competitions

If chip wars are a “hard blockade” contest, then the Agent war presents a completely different structure: it is a dual, intertwined race — one for capability boundaries, another for rule-making authority.

Capability race: The US and China are on diverging paths. The US is pursuing a “full-stack export” route: the Trump administration approved Nvidia's H200 chip exports to China at the end of 2025 and wrote “US technological standards leading global AI development” into its national security strategy. The logic: as long as the world runs on US-based technology, the US holds the keys to the ecosystem.

China's approach is “application-layer overtaking.” ByteDance has launched multiple Agent-integrated applications ahead of US counterparts; Z.ai's GLM-5.1 can work autonomously on a single task for up to 8 hours; and Meta's acquisition of the Manus team, which had relocated from China to Singapore, reflects the global flow of top-tier Agent talent.

  • Commercialization speed of AI Agents (China vs US products): ByteDance leads by about 6-9 months

  • 2026 AI security expenditure growth forecast (Gartner): +44%, reaching $238 billion

  • Share of Agent-related clauses in major countries’ AI strategies (early 2026): tripled compared to previous period

Rule-making race: The divergence is even more fundamental. The Atlantic Council’s analysis points out that the core contradiction in current global AI governance frameworks is: countries can reach consensus on scientific assessment and transparency principles but avoid binding restrictions on “high-risk AI uses” (autonomous weapons, mass surveillance, information manipulation). On the surface, global cooperation; in essence, geopolitical competition.

II. The Most Difficult Legal Dilemma: Is an Agent a Person or a Tool?

What truly makes the Agent era challenging for governments isn't just its military potential, but a more fundamental question: what legal status does an AI that can decide, act, and err on its own actually have?

A CFR report from January this year states explicitly that 2026 may become the breakout year for debates over AI legal personhood. The core contradiction runs along two dimensions. First, when AI Agents play a direct role in cyberattacks, financial manipulation, or physical harm, who bears legal responsibility: the developer, the deployer, or the user? Second, when countries give radically different answers, a regulatory arbitrage space opens up, much like offshore financial centers: the jurisdiction with the most lenient legal framework for Agents will attract the capital and the innovation.

If major powers have fundamental disagreements over whether AI systems can bear legal responsibility, the geopolitical impact will be profound — just as lax regulation once attracted capital to offshore financial centers, governments with looser rules will attract rapid Agent innovation.

— CFR “How to Decide the Future of AI in 2026,” January 2026

The practical pressure of this issue has already arrived. At the RSA Security Conference in March, “rogue AI Agents” were officially listed as a separate threat category: Agents that can be hijacked or maliciously deployed, probe their network environment, impersonate legitimate users, and persist inside systems undetected.

Three key signals for Chinese readers

In this Agent geopolitical game, three signals are especially worth noting:

Signal 1: The narrative of sovereign AI is spreading globally. India launched its first sovereign large language model at the AI Impact Summit in February 2026. More countries are realizing that building on foreign AI foundations means entrusting their data, decision logic, and potential backdoors to others. The wave of sovereign AI building is essentially the projection of “de-dependence” national strategies onto the technology layer.

Signal 2: Open-source Agents are a double-edged sword. The open-source framework OpenClaw quickly gained millions of installs but soon revealed serious security flaws such as permission leaks and runaway sub-Agents. This exposes a deep contradiction: open source lowers the barrier to acquiring capability, but it also hands potential attackers the same weapons at the same cost. By this logic, the controlled, boundary-defined open-source paths China has promoted (such as the open strategies of Tongyi Qianwen and DeepSeek) are more strategically coherent than unrestricted full open source.

Signal 3: Energy and computing infrastructure are becoming new strategic battlegrounds. Last week, figures such as former Chairman of the US Joint Chiefs of Staff Dunford stated explicitly that every local government decision to approve a data center is a national security decision. Data center siting, energy supply, network connectivity: all of these infrastructure policies are being re-evaluated within a national security framework.

· · ·

The chip war is a war of quantity: who owns more and better hardware. The Agent war is a war of throughput: who can have AI autonomously and continuously complete more high-value tasks. The former can be blockaded; the latter is almost impossible to block.

This is an unsettling conclusion: as AI learns to act on its own, the traditional “chokepoint” logic begins to fail. You can block chips, but the opponent trains autonomous attack Agents on fewer of them. You can control the export of model weights, but open-source frameworks have long since bypassed that gate.

The real competition is shifting to a more difficult-to-quantify, harder-to-control dimension: who can build the fastest, most trustworthy, and most resilient human-machine collaboration system? The answer to this question will become clearer over the next five years.
