Modern Mechanics 24

AI in Epic Fury? How US Military Used Claude in Iran Strikes Despite Ban

The US military used Claude AI in Iran strikes despite Trump’s ban.

The US military reportedly used Claude AI during its recent joint strikes on Iran in Operation Epic Fury.

Claude is an artificial intelligence model developed by Anthropic. The move came despite Trump having ordered federal agencies to cut ties with the company just hours earlier.

The revelation was first reported by The Wall Street Journal. It highlights how deeply AI tools are now embedded in US military operations.

Operation Epic Fury and AI’s Role

On February 28, US forces, alongside Israel, launched Operation Epic Fury. The offensive targeted Iranian nuclear facilities and key military infrastructure.

According to reports, the Pentagon used Claude for intelligence analysis, target selection, and battlefield simulations. The AI helped process large volumes of data quickly. It assisted commanders in evaluating possible strike scenarios. It also supported real-time strategic planning.

Military officials did not publicly detail Claude’s technical role. However, sources indicated that the model was integrated into classified defense networks. That made an immediate shutdown difficult.

Trump Orders Immediate Cut

Hours before the strikes, Trump directed federal agencies to stop using Anthropic’s tools. He wrote on social media, “We don’t need it, we don’t want it, and will not do business with them again!” He called the company’s stance a disastrous mistake.

The dispute began earlier this year. Reports said Claude was used in a US operation targeting Venezuelan President Nicolás Maduro. Anthropic objected. The company said its terms of service prohibit its AI from being used for violent actions, weapons development, or mass surveillance.

Anthropic’s CEO, Dario Amodei, refused Pentagon demands for unrestricted military access to its models. He opposed using AI for domestic mass surveillance or fully autonomous weapons.

In response, the administration labeled Anthropic a supply chain risk. That designation applies mainly to Department of Defense contracts.

Pentagon’s Transition Plan

Defense Secretary Pete Hegseth sharply criticized Anthropic. He said on X, “America’s warfighters will never be held hostage by the ideological whims of Big Tech.” He demanded full access to AI systems for lawful purposes.

Yet he acknowledged a practical challenge. Claude was already embedded in secure military systems. Removing it immediately could disrupt operations. Hegseth allowed Anthropic to continue limited services for up to six months to ensure a smooth transition.

This explains why Claude was reportedly used in Operation Epic Fury despite the public break.

Why Claude Could Not Be Replaced Overnight

Claude is said to be the only advanced AI model fully integrated into certain classified US defense networks. Replacing it requires technical upgrades, security clearances, and testing.

Switching AI systems inside sensitive military platforms is not simple. It involves reconfiguring software, retraining personnel, and ensuring operational safety.

As a result, the Pentagon continued using Claude during the transition window.

OpenAI Steps In

Following the fallout, OpenAI reached an agreement with the Pentagon. CEO Sam Altman confirmed that his company would provide AI tools for use within classified defense networks.

Altman has also stated that OpenAI will not permit its models to be used for fully autonomous weapons or unlawful surveillance. However, details of the agreement remain limited.

Deployment of OpenAI’s tools across military systems will take time. Until then, Anthropic’s technology may continue to operate in select areas under the six-month transition plan.

Anthropic Plans Legal Challenge

Anthropic has announced it will challenge the supply chain risk label in court. The company said in a statement, “No amount of intimidation or punishment will change our position on mass domestic surveillance or fully autonomous weapons.”

It argued that the designation sets a dangerous precedent for American technology firms working with the government.

AI’s Expanding Role in Warfare

The episode shows how central AI has become to modern warfare. From intelligence gathering to target analysis, advanced models now shape battlefield decisions.

At the same time, it raises ethical questions. Should private AI companies control how their tools are used in military operations? Or should governments have full access in times of conflict?

Operation Epic Fury has brought that debate into sharp focus. Even as political tensions rise, the US military’s reliance on AI systems like Claude shows how technology and national security are now tightly linked.

Now, the Pentagon faces a balancing act. It must maintain operational readiness while transitioning away from a key AI partner. The coming months will determine how smoothly that shift happens and what it means for the future of AI in warfare.
