The U.S. military reportedly used Claude AI in airstrikes on Iran, tasking it with intelligence assessment and target identification. Experts are issuing stern warnings.

Artificial intelligence is quietly making its way into real wartime decision-making.

According to The Wall Street Journal, the U.S. Central Command used Anthropic’s Claude AI to perform intelligence assessment, target identification, and battlefield simulation during an airstrike against Iran. However, the specific meaning of “target identification”—whether Claude was marking strike locations or estimating casualties—remains undisclosed, and no one is legally obligated to make such information public.

Notably, just hours before the airstrike, Trump ordered federal agencies to cease using Claude, but the tool is deeply integrated with Pentagon systems, making a switch difficult in the short term. Claude was also used in January for operations targeting Maduro.

Meanwhile, the Pentagon is in a direct standoff with Anthropic over access to classified systems. Anthropic CEO Dario Amodei refused the Department of Defense’s demand for “unrestricted access,” insisting on red lines against “mass surveillance of Americans” and “fully autonomous weapons.”

Academic simulations show that, in high-risk adversarial scenarios, mainstream large AI models ultimately choose to use nuclear weapons in 95% of cases. Together, these developments have sharply heightened concerns over the militarization of AI.

Claude’s Involvement in Airstrikes Remains a Mystery

AI has long been used to analyze satellite images, detect cyber threats, and guide missile defense systems.

But what’s more concerning now is that the same underlying technology used by billions daily for writing emails is being directly integrated into battlefield decision-making chains.

In November last year, Anthropic partnered with data analytics firm Palantir Technologies to make Claude a reasoning engine for military decision support systems.

In January this year, Anthropic submitted a $100 million proposal to the Pentagon to develop voice-controlled autonomous drone swarms—using Claude to translate commanders’ intentions into digital commands and coordinate drone formations. The proposal was ultimately rejected, but the contract details went far beyond simple intelligence reports, including “target perception and sharing” and full control of drone swarms from “launch to termination.”

Even so, how these systems actually operate remains opaque. Bloomberg columnists note that AI companies refuse to disclose training data and reasoning paths, and the secrecy surrounding military applications adds another layer of concealment.

Pentagon Demands Unrestricted Access, Anthropic Refuses

Reports indicate that Defense Secretary Pete Hegseth, in a memo, explicitly demanded models "not constrained by policies or limitations on legitimate military use" and issued an ultimatum: grant unrestricted usage rights by 5 p.m. Friday, or face serious consequences.

Amodei wrote in a blog post, "We cannot ethically agree to their demands." In a statement, Anthropic said that the latest Department of Defense contract language "appears to be a compromise" but, combined with its "legal terminology," could allow the company's guardrails to be overridden at any time.

The Pentagon responded strongly. Defense Department spokesperson Sean Parnell stated on X that the department has no intention of conducting mass surveillance of Americans or developing autonomous weapons without human oversight, but emphasized, “We will not let any company dictate how we make combat decisions.” Reports also indicate that the Pentagon has threatened to invoke the Cold War-era Defense Production Act to forcibly requisition the Claude model, and has asked defense contractors like Boeing and Lockheed Martin to assess their reliance on Anthropic, preparing to list it as a “supply chain risk.”

Amodei pointed out the contradiction: "These threats are self-contradictory: one labels us as a security risk; the other claims Claude is vital to national security." He added that if the Pentagon decides to abandon Anthropic, the company "will work to facilitate a smooth transition to another supplier."

Simulation Data: AI Models Choose Nuclear Strike in 95% of Cases

Anthropic’s concerns about “fully autonomous weapons” are supported by recent academic simulations.

King's College London researcher Kenneth Payne reportedly led a highly realistic war game in which ChatGPT-5.2, Claude Sonnet 4, and Gemini 3 Flash competed against one another. Over 329 rounds, none of the models surrendered, and in 95% of cases they ultimately chose to deploy nuclear weapons.

James Johnson of the University of Aberdeen warned, "From a nuclear risk perspective, these findings are alarming." He cautioned that, unlike cautious human decision-makers in high-stakes situations, AI models tend to amplify each other's responses, potentially leading to catastrophic outcomes. Experts note that the "nuclear taboo" constrains machines far less than it constrains humans, which is a core reason Anthropic refuses to relax its restrictions despite the heavy pressure.

Regulatory Vacuum: Rules and Accountability Frameworks Are Still Missing

Bloomberg analysis points out that many risks associated with AI systems on the battlefield are accumulating in a regulatory vacuum. The “hallucination” problem of large AI models stems from their training mechanisms—models are incentivized to produce answers rather than admit uncertainty. Some scientists believe this flaw may never be fully fixable.

Israel's use of an AI system in Gaza known as "Lavender" provides a cautionary example. Israeli media report that the system has a roughly 10% error rate, leading to about 3,600 misidentified targets. Mariarosaria Taddeo, a professor at the Oxford Internet Institute, said, "These systems are incredibly fragile and unreliable, and war is such a dynamic, sensitive, and life-critical domain."

At the institutional level, Article 36 of Additional Protocol I to the Geneva Conventions requires new weapons to be reviewed before deployment, but AI systems that continuously learn from their environment effectively become new systems after each update, making this requirement nearly impossible to enforce.

Elke Schwarz of Queen Mary University of London warned that AI in warfare is often adopted to "accelerate decision-making," which is precisely a pathway to adverse outcomes: faster decisions mean larger scale and less human oversight.

Bloomberg cites the example of the armed drones the U.S. has used since 9/11. After nearly 15 years of leaks, media pressure, and legal challenges, the Obama administration publicly released drone strike casualty figures in 2016, though these are widely believed to be underreported. Regulating AI will be even harder, requiring greater public and legislative pressure to establish transparency mechanisms similar to those proposed during the Trump era.

Key questions remain unresolved: “As a society, we have yet to decide whether machines can determine if a person should be killed,” Taddeo said.
