Too Embedded to Remove? This AI Model Was Used in Recent Strikes

Representational image of the war against the Iranian regime

Advanced artificial intelligence models have become increasingly integrated into defense planning and operational workflows, but their growing role is also exposing tensions between military requirements and regulatory decisions. A recent episode highlights how such systems may be deeply embedded in sensitive missions — even when political directives attempt to curtail their use.

The issue came to a head after a presidential directive ordered all federal agencies to halt use of the Claude AI model developed by Anthropic. The decision was part of a broader dispute between the U.S. government and the AI firm over restrictions in the model’s terms of service, which include limits on its use in autonomous weapons systems and in mass domestic surveillance. Despite the ban, multiple reports indicate that U.S. military commands continued to employ the model during major strikes on Iran, using it for tasks such as intelligence analysis, target selection and battlefield simulation.

The challenge reflects a broader problem: once sophisticated AI tools are embedded in defense systems — whether for intelligence processing, simulation or operational planning — removing them quickly becomes complex. According to Interesting Engineering, AI models like this one were integrated through classified channels and workflows, making an abrupt switch difficult without disrupting mission-critical capabilities. Defense officials acknowledged this complexity and moved to phase out the technology over a defined transition period rather than enforcing an immediate cutoff.

In practice, the model’s use in military scenarios underscores how large language models can support defense operations beyond simple text generation. During the reported strikes, the model provided analytic and simulation support to commanders, helping to refine targeting and anticipate battlefield dynamics. Such applications illustrate why AI systems have attracted interest as force multipliers in defense contexts, where processing vast amounts of data quickly can be a decisive factor.

For defense and homeland security stakeholders, this episode highlights both the operational value and the regulatory risks of adopting advanced AI technology. As armed forces and security agencies increasingly rely on AI for intelligence and planning support, questions about ethical use, governance and continuity planning will continue to shape procurement and deployment decisions.