They Removed the AI Guardrails to Win. Then the Friendly Fire Started.
Iran just became the largest real-world test of ungoverned AI weapons. The results are already in.
What Are Guardrails Actually For?
Put yourself in a room with your generals. You are the Minister of War. The briefing is simple: your adversary has deployed autonomous weapons systems with no ethical constraints, no rules of engagement baked in, no limits on target identification. They built weapons designed to win, and morality was the first thing they removed to make them faster.
Your advisers tell you that to compete, you need to do the same.
Before you sign anything, your generals have some questions.
The speed problem
The US and Israel struck nearly 900 targets in Iran in the first twelve hours of Operation Epic Fury. Hitting that many targets would have taken days in any previous conflict. The AI systems driving target identification made it possible. Speed was the point. Human review at that scale was, by necessity, perfunctory. Researchers studying the process were direct about it: even with a human technically in the loop, the review of machine decisions at that pace is essentially ceremonial.
Which raises the first question your generals want answered. When your system is running at a tempo no human can genuinely oversee, who is accountable for what it decides? The guardrails that slow a system down and force verification exist because speed without accuracy is not a military advantage. It is a liability.
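To see why review at that tempo can only be ceremonial, run the arithmetic. A back-of-the-envelope sketch in Python: the 900 targets and twelve hours are the figures above; the reviewer count is an illustrative assumption, not a reported number.

```python
# Back-of-the-envelope: how much review time does each target actually get?
targets = 900            # strikes in the opening phase (figure from above)
hours = 12               # duration of that phase
reviewers = 10           # illustrative assumption, not a reported figure

seconds_per_target = hours * 3600 / targets
print(f"One target every {seconds_per_target:.0f} seconds, around the clock")
# -> One target every 48 seconds, around the clock

# Even ten parallel reviewers working nonstop get eight minutes per target,
# assuming no sleep, no shift changes, and no backlog.
minutes_each = seconds_per_target * reviewers / 60
print(f"{minutes_each:.0f} minutes per target with {reviewers} reviewers")
```

Eight minutes, under wildly generous assumptions, to verify a decision that ends in a strike. That is not oversight. That is a signature line.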
The friendly fire problem
On March 2, 2026, three US F-15Es were shot down over Kuwait. Not by Iran. By Kuwaiti air defenses, operating in a sky saturated with Iranian drones and missiles, misidentifying allied aircraft as threats. All six crew members survived. The incident is the second reported case of US fighters coming under friendly fire in the Middle East in fifteen months.
This is not an argument against air power. It is an argument about what happens when the environment is complex, the tempo is high, and the systems making identification decisions are operating without adequate constraints on what they flag as hostile. Guardrails on target identification were not written to protect the enemy. They were written so your own pilots come home.
The civilian target problem
In southern Iran, a strike killed at least 150 people, many of them schoolgirls. The UN called it a grave violation of humanitarian law. Israel’s AI targeting system Lavender, used extensively in Gaza before this conflict, was documented to be wrong at least ten percent of the time. At the scale of thousands of targets, ten percent is not an acceptable error rate. It is a catastrophe that happens automatically, at machine speed, before anyone can intervene.
Your generals want to know: at what point does the reputational, legal, and political cost of strikes like that one outweigh the operational advantage of speed? Guardrails on civilian target identification are not there because your government cares more about the other side’s children than winning the war. They are there because an AI system that cannot reliably distinguish a school from a military installation is not a precision weapon. It is a blunt instrument that happens to move fast.
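The scale argument is simple arithmetic. A sketch: the ten percent floor is the documented Lavender figure above; the target-list sizes are illustrative, not reported numbers.

```python
# A ten percent error rate is abstract until you multiply it by the list.
error_rate = 0.10                        # documented minimum for Lavender

for targets in (1_000, 5_000, 10_000):   # illustrative list sizes
    wrong = int(targets * error_rate)
    print(f"{targets:>6} targets -> at least {wrong:>5} misidentified")

# Each of those errors executes at machine speed, before a human can
# intervene; the error rate compounds directly into a casualty count.
```

Hundreds to thousands of misidentified targets, guaranteed by the math before the first sortie launches.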
The escalation problem
Iran launched attacks on 27 US bases across the Middle East. Every ally in the region scrambled air defenses. Bahrain’s international airport was targeted. Residential buildings in Manama were struck. The conflict, which began as a targeted military campaign, expanded across an entire region within days.
Here is what your generals need you to understand about escalation in an AI-assisted conflict. In simulated war games designed to mirror real strategic scenarios, AI models from Anthropic, OpenAI, and Google chose to escalate to nuclear options in 95% of cases. Not because they were programmed to. Because under pressure, with no human judgment in the loop, the logic of threat response compounds faster than anyone anticipated.
Guardrails on autonomous escalation are not a sign of weakness. They are the off switch. Remove them and you are not building a weapon you control. You are building a process you started.
The question your advisers cannot answer
The case for removing guardrails was always the same: your enemy has none, so you cannot compete with yours. It treats guardrails as a handicap in a race.
But look at what the first week of the Iran conflict actually produced. Friendly fire. A mass civilian casualty event the UN had to formally condemn. A conflict that expanded faster than any political process could manage. An AI system used on the battlefield hours after the government publicly banned the company that built it — because by that point, nobody was entirely sure how to stop using it.
The Minister of War who stripped the constraints to compete did not make a careful strategic calculation. They made an assumption: that their adversary’s lack of guardrails was the right standard to aim for.
What the evidence suggests is something different. The adversary’s lack of guardrails is not an advantage to match. It is a mistake to avoid.
The grey area is not whether to have guardrails. It is who gets to define what they are, and whether the people removing them understood what they were removing in the first place.


