I just came across a thought-provoking topic: AI in warfare. Recently, the United States and Israel have made extensive use of artificial intelligence in the Iran conflict, which may be the first true "AI war" in history.



In intelligence analysis, the biggest bottleneck of traditional methods is the sheer volume of data, which human analysts cannot keep up with. The U.S. military has said that analysts typically manage to read only a little over 4% of intelligence reports. Given the mountains of video, audio, and communication logs collected every day, manual screening is simply impossible. Yishai Kohn of the Israeli Defense Ministry noted in an interview that many potential military tasks are effectively infeasible because there aren't enough personnel to process the critical intelligence.

AI-driven machine vision can now quickly identify targets in vast quantities of video and imagery, distinguish specific aircraft models or vehicle types, and extract and summarize dialogue from audio. Israel's intelligence agencies have long monitored Tehran's traffic cameras and senior officials' communications; AI now helps them pinpoint what they need within that ocean of data. The efficiency gains are obvious.

Even more interesting is task planning. Traditional military planning requires intelligence officers, combat commanders, weapons experts, and logistics personnel to sit down together, and it often takes several weeks to finalize a plan. With AI, that process can be compressed to a few days. The reason is that a single change, such as a shift in a target's location, triggers a chain reaction affecting personnel deployment, flight routes, fuel consumption, and many other factors; previously those updates were slow and prone to subjective bias, whereas AI can process these interactions almost instantly and calculate how each variable affects the overall deployment. The Pentagon even uses AI to run models and digital war games, processing millions of iterations to quickly converge on a plan that best achieves its objectives.
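To make the cascade concrete, here is a toy sketch of the kind of iterative plan re-scoring described above. Everything in it is hypothetical (the variable names, the scoring formula, and the numbers are illustrative, not a real military planning model): when one input such as route length or risk changes, a planner can re-sample noisy variants of each candidate plan many times and keep the best, instead of re-deriving every dependent quantity by hand.

```python
import random

def score_plan(route_km, fuel_per_km, fuel_budget, risk):
    """Score one candidate plan: 0 if the fuel budget is exceeded,
    otherwise higher for lower risk and shorter routes (toy formula)."""
    if route_km * fuel_per_km > fuel_budget:
        return 0.0
    return (1.0 - risk) * (1.0 / (1.0 + route_km / 1000))

def best_plan(candidates, iterations=100_000, seed=0):
    """Monte Carlo search: sample noisy variants of each candidate
    (here, jittering the risk estimate) and keep the plan with the
    highest average score across its samples."""
    rng = random.Random(seed)
    best, best_avg = None, -1.0
    per_plan = iterations // len(candidates)
    for plan in candidates:
        total = 0.0
        for _ in range(per_plan):
            # Perturb the risk estimate to reflect uncertain intelligence.
            noisy_risk = min(1.0, max(0.0, plan["risk"] + rng.gauss(0, 0.05)))
            total += score_plan(plan["route_km"], plan["fuel_per_km"],
                                plan["fuel_budget"], noisy_risk)
        avg = total / per_plan
        if avg > best_avg:
            best, best_avg = plan, avg
    return best, best_avg

# Two hypothetical candidate plans; changing any field re-ranks them.
candidates = [
    {"name": "A", "route_km": 900, "fuel_per_km": 3.0,
     "fuel_budget": 4000, "risk": 0.2},
    {"name": "B", "route_km": 600, "fuel_per_km": 3.0,
     "fuel_budget": 4000, "risk": 0.4},
]
plan, avg = best_plan(candidates)
```

The point of the sketch is the loop, not the formula: because every variable feeds a single scoring function, updating one input (say, a new target location that lengthens `route_km`) automatically propagates to the final ranking, which is the "chain reaction" handling described above, compressed into code.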

But this is where the problem lies. War is one of the most complex and ambiguous domains humans operate in. Jack Shanahan, a retired Air Force general and former head of military AI, has warned that the large datasets used to train military AI are inherently outdated or ambiguous, and that if the system makes a mistake, the consequences could be deadly. Reports indicate that U.S. forces may have caused civilian casualties at a girls' school in Iran due to intelligence errors. That is the harsh reality.

The most dangerous aspect is over-reliance on machine decision-making. Emelia Probasco, a researcher at Georgetown University's Center for Security and Emerging Technology, has warned that handing decision authority over to AI is a "serious problem." There need to be constraints in place to limit the risks, but investment in that infrastructure is currently clearly insufficient.

In plain terms, AI in warfare is a double-edged sword: the efficiency gains are real, but so are the potential disasters. In scenarios with stakes this high, human judgment will always be irreplaceable.