I just read an interesting report from the Wall Street Journal about how AI is, for the first time, being used at large scale in real combat. The US-Israel military attack on Iran has become a kind of experimental lab for autonomous warfare technology, and the results are reshaping how we think about modern warfare.

The most striking thing is the transformation in intelligence gathering. In the past, analysts could review only about 4% of incoming intelligence material - they were basically drowning in data. With AI, they can now process far larger volumes of information and pick out signals amid the noise. Colonel Yishai Kohn of the Israeli Defense Ministry said AI has had its biggest impact in intelligence: many investigative tasks that were previously impossible due to human limitations can now be carried out. They even managed to hack traffic cameras in Tehran and intercept communications, with AI helping to filter actionable insights from all that data.

AI machine vision can quickly identify targets from thousands of videos and photos, even distinguishing specific aircraft or vehicle models. Conntour's CEO explained that intelligence agencies already have vast amounts of video data, and AI now enables them to find exactly what they’re looking for.

But what’s even more impressive is the acceleration in mission planning. Traditional military operations require weeks of coordination among analysts, commanders, weapons experts, and logistics managers. Now? It can be just a few days. Every small change - for example, a target shifting location - triggers a cascade effect on pilot schedules, flight plans, and fuel consumption. AI can process all those complex interactions instantly and calculate their impact on the entire deployment. The Pentagon is increasingly using AI to run digital simulations and optimize target priorities by processing millions of solution iterations.
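To make the cascade idea concrete, here is a toy sketch of what "a target shifting location" forces a planner to recompute. Everything in it is hypothetical - the class names, distances, fuel rates, and speeds are invented for illustration and have nothing to do with any real military system; the point is only that one changed input invalidates every dependent figure downstream.

```python
from dataclasses import dataclass
from math import dist

@dataclass
class Sortie:
    """A single planned flight (all values hypothetical)."""
    pilot: str
    base: tuple            # (x, y) position in km
    target: tuple          # (x, y) position in km
    cruise_kmh: float = 900.0
    fuel_kg_per_km: float = 3.0

    def plan(self):
        # Derived quantities all depend on the target's location.
        leg = dist(self.base, self.target)
        round_trip = 2 * leg
        return {
            "distance_km": round(round_trip, 1),
            "fuel_kg": round(round_trip * self.fuel_kg_per_km, 1),
            "flight_hours": round(round_trip / self.cruise_kmh, 2),
        }

def replan(sorties, new_target):
    # One target change cascades: every affected sortie's distance,
    # fuel load, and flight time must be recomputed.
    return [
        Sortie(s.pilot, s.base, new_target, s.cruise_kmh, s.fuel_kg_per_km).plan()
        for s in sorties
    ]

sorties = [Sortie("A1", (0, 0), (400, 300)), Sortie("B2", (50, 0), (400, 300))]
before = [s.plan() for s in sorties]          # original plans
after = replan(sorties, (600, 100))           # target moved -> full recompute
```

A real planning system would of course track far more interdependencies (pilot rest schedules, tanker availability, deconfliction), which is exactly why recomputing the whole picture in seconds rather than days is such a shift.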

But this also reveals the dark side of the technology. War is one of the most chaotic and complex domains. Jack Shanahan, former AI chief at the Pentagon, highlighted that training data for military AI is often outdated or unclear. More seriously, errors in AI systems on the battlefield can be fatal. There are reports that intelligence mistakes likely caused dozens of children to be killed at a girls’ elementary school in Iran on the first day of the operation.

The most concerning issue is over-reliance on AI decision-making. Emelia Probasco from Georgetown’s Center for Security and Emerging Technology warned that delegating decisions to AI is a serious problem. Proper protective measures are needed to limit risks, but infrastructure investment is still far from adequate. In war, human judgment remains irreplaceable.