The Pentagon Banned the AI Company That Built Its Targeting System. Then It Used the System to Strike 1,000 Targets in 24 Hours.
On February 26, Anthropic refused to remove Claude's ethical restrictions. On February 27, the DoD designated Anthropic a supply chain risk. On February 28, Claude processed over 1,000 targets in Operation Epic Fury. Three days. One AI model. Zero irony acknowledged.
Seventy-two hours. That's how long it took to go from "this AI company is too dangerous to do business with" to "this AI company's product is running our kill chain." On February 26, 2026, Anthropic formally refused the Pentagon's demand to remove ethical guardrails from Claude, its flagship AI model. On February 27, the Department of Defense designated Anthropic a supply chain risk and effectively banned the company from future contracting. On February 28, Claude was integrated into Palantir's Maven Smart System and used to identify and process over 1,000 military targets during Operation Epic Fury's opening salvo against Iran.
Nobody at the Pentagon has explained how you ban a vendor on Tuesday and deploy its product on Wednesday. Nobody has been asked.
How $200 Million Became a Loyalty Test
In July 2025, Anthropic signed a $200 million prototype agreement with the Department of Defense to integrate Claude into military intelligence workflows. At the time, this looked like a standard procurement deal. AI companies had been chasing defense contracts since Google's Project Maven fallout in 2018, when thousands of Google employees protested the use of AI in drone targeting and the company pulled out. Anthropic, founded by former OpenAI researchers who left partly over safety concerns, had always positioned itself as the responsible AI lab.
According to the Cloud Security Alliance's reconstruction of the contract timeline, the Pentagon's demands escalated rapidly. By late 2025, DoD officials wanted Anthropic to certify Claude for "any lawful use," including mass surveillance, autonomous targeting, and weapons system integration without ethical review gates. Anthropic's Acceptable Use Policy explicitly prohibited several of these applications.
On February 26, Anthropic said no. Specifically, the company refused to remove model-level restrictions that prevented Claude from generating target coordinates without human review, from processing surveillance data at scale without audit trails, and from operating in autonomous weapons loops. Twenty-four hours later, Anthropic was blacklisted.
Twenty-four hours after that, Claude was at war.
Operation Epic Fury: AI in a Kill Chain
A Georgetown professor's analysis published in Homeland Security Today confirmed what several Pentagon sources had leaked: Claude, operating through Palantir's Maven Smart System, processed over 1,000 targets in its first 24 hours of combat use. Satellite imagery, signals intelligence, and geospatial data were fed into the system. Claude identified targets. Human operators reviewed and approved strikes. Tomahawks, JDAMs, and stand-off munitions followed.
A thousand targets in a day. For context, during the 2003 invasion of Iraq, coalition forces struck approximately 1,700 targets in the entire first week, a rate of roughly 243 per day. AI compressed that tempo by roughly 4x.
Pentagon officials emphasized the "human-in-the-loop" framework mandated by DoD Directive 3000.09, which requires "appropriate levels of human judgment" for autonomous weapons systems. AI recommends. Humans approve. But when the machine serves up a thousand recommendations in 24 hours, how long does a human spend on each one? Assuming continuous 24-hour operations with rotating staff, that's roughly 86 seconds per target package. Read the intelligence summary, check the coordinates, approve or reject, move on.
Eighty-six seconds is not review. It's ratification.
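The arithmetic behind those two figures is easy to check. A quick sanity check (continuous 24-hour operations and uniform time per target are assumptions of this article, not reported facts):

```python
# Review time per target, assuming continuous 24-hour operations
# and uniform time allocation across all target packages.
targets = 1_000
seconds_per_day = 24 * 60 * 60
review_seconds = seconds_per_day / targets  # 86.4 seconds per package

# Tempo comparison with the 2003 Iraq air campaign:
# ~1,700 targets struck over the first week vs. 1,000 in one day.
iraq_targets_per_day = 1_700 / 7                # ~243 targets/day
compression = targets / iraq_targets_per_day    # ~4.1x faster

print(f"{review_seconds:.1f} s per target, ~{compression:.1f}x the 2003 tempo")
```

Even under the most generous assumption of perfectly staffed round-the-clock review, the per-target budget never exceeds a minute and a half.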
LUCAS: The $35,000 Weapon Nobody Saw Coming
While Claude handled targeting, a second AI system made its combat debut. LUCAS (Low-cost Unmanned Combat Aerial System) drones, built by SpektreWorks under a rapid acquisition contract, were deployed operationally for the first time on December 16, 2025, launched from the USS Santa Barbara as part of Task Force Scorpion Strike.
Each LUCAS unit costs between $35,000 and $43,000, depending on the source. It flies autonomously using AI flight controls and can coordinate in swarms without continuous human input. For comparison, a single Tomahawk cruise missile costs approximately $2.5 million. A Tomahawk destroys one target. A LUCAS drone destroys one target. One costs roughly 70 times more than the other.
Here is what that math looks like over 1,000 targets:
| Weapon System | Unit Cost | Cost for 1,000 Targets | AI Role |
|---|---|---|---|
| Tomahawk (BGM-109) | $2,500,000 | $2.5 billion | Target selected by AI |
| LUCAS drone | $35,000–$43,000 | $35–43 million | Target selected by AI, flight autonomous |
| Anthropic contract ceiling | $200 million | — | The AI that selected both |
Put differently: the AI contract that identified all those targets cost less than 100 Tomahawks. And the LUCAS drones that can execute strikes autonomously cost less per unit than a mid-range pickup truck. Pentagon procurement has always been expensive. This is the moment it got cheap.
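The cost math above, spelled out (unit costs from the reporting cited; one weapon per target is a simplifying assumption of this comparison):

```python
# Cost comparison over a 1,000-target campaign. Unit costs come from
# the sourced reporting; one-weapon-per-target is a simplification.
targets = 1_000
tomahawk_unit = 2_500_000
lucas_low, lucas_high = 35_000, 43_000

tomahawk_total = tomahawk_unit * targets                    # $2.5 billion
lucas_total = (lucas_low * targets, lucas_high * targets)   # $35M-$43M

cost_ratio = tomahawk_unit / lucas_low               # ~71x per strike
contract_in_tomahawks = 200_000_000 / tomahawk_unit  # 80 Tomahawks

print(f"ratio ~{cost_ratio:.0f}x; $200M contract = "
      f"{contract_in_tomahawks:.0f} Tomahawks")
```

The comparison ignores range, payload, and survivability differences between the platforms, as the Limitations section notes; it is a statement about procurement economics, not military equivalence.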
Minab: What 86 Seconds Misses
On the morning of March 5, 2026, coalition forces struck a compound adjacent to a former IRGC (Islamic Revolutionary Guard Corps) base in Minab, a city in southern Iran's Hormozgan province. According to the OECD AI Incident Monitor and investigations by BBC Verify, NPR, and CBC, the target had once been a military installation. It hadn't been one for over a decade: in 2014, the military compound had been separated from the adjacent structure. That adjacent structure was a girls' school.
Between 168 and 180 people died, most of them schoolgirls. Investigators found evidence of a "triple tap" pattern: an initial strike, a follow-up strike minutes later, and a third strike after first responders arrived. Whether the targeting system flagged the school as a distinct structure from the former military base remains unclear. What is clear is that the geospatial data used by the AI system had not been updated since at least 2012.
Eighty-six seconds per target. Stale satellite imagery. A girls' school that used to be next to a military base. These are not bugs in the system. They are the system working exactly as designed, at exactly the speed it was designed to work.
Where Are the Rules?
In December 2024, the UN General Assembly passed Resolution 79/62, calling for a legally binding agreement to prohibit autonomous weapons systems by 2026. Secretary-General António Guterres backed a full ban. Over 100 nations supported the resolution.
It is now March 2026. No treaty exists. LUCAS drones are flying autonomous combat missions. Claude is processing targeting data at a rate no human intelligence team can match. DoD Directive 3000.09 requires human judgment in the loop, but 86 seconds per target package is the kind of human judgment that exists on paper and disappears in practice.
Meanwhile, Anthropic, the company that actually tried to enforce restrictions, got fired for it.
Strongest Counterargument
AI targeting may have saved lives. A thousand targets processed in 24 hours with AI precision could produce fewer civilian casualties than the same campaign run over weeks by human analysts working with imperfect intelligence and fatigue. Conventional air campaigns have historically killed far more civilians per target destroyed. Operation Epic Fury's civilian casualty ratio, even including Minab, may turn out to be lower than comparable campaigns in Iraq, Libya, or Kosovo when complete data emerges.
DoD Directive 3000.09 does require human judgment, and the Pentagon claims it was followed. If Claude isn't used, a less safety-conscious model will be. Anthropic's restrictions may have been principled but impractical: the Pentagon would simply switch to a vendor with fewer scruples. And "any lawful use" is technically correct: striking enemy military installations is lawful under international humanitarian law.
All of that is probably true. None of it explains how you ban a company on Tuesday, deploy its product on Wednesday, and bomb a girls' school five days later.
Limitations
Several facts in this analysis cannot be independently verified. Anthropic has not publicly confirmed the specific demands it refused, and our reconstruction relies on the Cloud Security Alliance's reporting, which cites unnamed Pentagon officials. Exact casualty figures at Minab range from 168 (Iranian state media) to 180 (BBC Verify), and a full independent investigation has not been completed. Our 86-seconds-per-target calculation assumes continuous 24-hour operations with no downtime, no batch processing, and uniform time allocation per target; actual review times vary based on target complexity. Cost comparisons between LUCAS and Tomahawk missiles simplify significantly: the platforms have different ranges, payloads, survivability profiles, and mission requirements. We do not have access to the specific Claude model version used, the exact Palantir Maven configuration, or the audit logs that would show what human review actually looked like in practice.
The Bottom Line
For decades, the autonomous weapons debate was theoretical. Researchers published papers. The UN convened working groups. Activists warned about "killer robots." Everyone agreed the conversation was important. In February 2026, while the conversation was still happening, the weapons showed up. An AI model identified over 1,000 targets in a day. Autonomous drones flew combat missions for $35,000 a pop. A girls' school in Minab became one of the first confirmed civilian casualties of algorithmic warfare. And the company that tried to put guardrails on the technology was punished for it. The debate about whether AI should be used in warfare is over. Not because anyone resolved it. Because the weapons didn't wait.
Sources
- HSToday — Algorithmic Warfare in the Iran Conflict: Operation Epic Fury and Dawn of the AI Battlefield (Georgetown professor analysis; Claude integrated via Palantir Maven, 1,000+ targets in 24 hours)
- Aerospace America / AIAA — Use of LUCAS Drones in Iran Puts Focus on Affordable, Fast-Moving Acquisition (SpektreWorks, $35,000/unit, first operational deployment Dec 2025)
- Cloud Security Alliance — DoD AI Guardrail Mandates: Vendor Governance (Anthropic $200M contract timeline, Feb 26 refusal, Feb 27 designation, Feb 28 deployment)
- OECD AI Incident Monitor — AI-Driven Targeting in Iran Leads to Civilian Harm (Minab school attack, algorithmic targeting errors)
- Wikipedia — 2026 Minab School Attack (aggregating NYT, BBC Verify, NPR, CBC investigations; 168–180 killed)
- UNRIC — UN Addresses AI and the Dangers of Lethal Autonomous Weapons Systems (General Assembly Resolution 79/62, Dec 2024)
- Defence.ai — DoD Updates Directive 3000.09 (autonomous weapons policy, human judgment requirements)
- GPS World — Attack Drones Deployed in the Iran Conflict (LUCAS specifications, $43K alt. estimate, USS Santa Barbara sea launch)