The Real AI Arms Race Nobody Is Talking About
NEWSFLASH: IT'S NOT ALL ABOUT OPENAI AND ANTHROPIC
Last week the world watched Anthropic lose a Pentagon contract for refusing to let its AI make autonomous targeting decisions. The debate that followed focused almost entirely on two companies - Anthropic and OpenAI - and one question: should there be a human in the loop?
It was the wrong question. And the companies that needed it to stay that narrow said nothing.
The Stack
To understand what is actually being built, you need to know how the AI weapons stack works. It has three layers.
At the top are the frontier AI labs. Anthropic, OpenAI, Google DeepMind. They build the large language models - the intelligence layer that does reasoning, planning, and decision support. This is where the Anthropic debate lived. These are the companies with published principles, public safety policies, and CEOs who give TED talks about existential risk.
In the middle are the targeting and data intelligence companies. This is where the actual kill chain gets built. Systems that fuse sensor data, identify threats, and generate targeting options at speeds no human analyst can match.
At the bottom are the hardware manufacturers. The companies that build the platforms the intelligence runs on. Drones, missiles, autonomous aircraft, naval vessels.
The debate last week happened entirely at the top of the stack. The companies in the middle and at the bottom had a very quiet week.
Palantir: The Kill Chain
Palantir does not make missiles. What it makes is more consequential. It builds the intelligence infrastructure that decides where the missiles go.
Its TITAN system - Tactical Intelligence Targeting Access Node - is a mobile ground station that fuses data from space sensors, satellites, and battlefield intelligence to generate targeting options for Army units. The contract to build ten of these systems was worth $178 million. The Army’s ten-year enterprise agreement with Palantir, signed in July 2025, is worth up to $10 billion.
The Maven Smart System, which Palantir also runs, is the Pentagon's primary AI targeting platform. The Department of Defense boosted that contract by $795 million in 2025 to meet growing demand from combatant commands using it to manage dynamic operations across entire theatres. Palantir has also signed a £1.5 billion agreement with the UK Ministry of Defence to develop targeting capabilities and support what the British government explicitly called the kill chain.
Palantir’s CEO Alex Karp has never pretended otherwise. He has been publicly explicit that western technology companies have a moral duty to help their governments win wars. No principles to walk back. No safety policy to rewrite. No Tuesday press release softening commitments before a Friday deadline.
Anduril: The Hardware
Palmer Luckey sold Oculus to Facebook at 21. Facebook fired him amid controversy over his support for Trump. He spent the next decade building Anduril, now valued at $30.5 billion. The company is constructing Arsenal-1 - a five-million-square-foot manufacturing facility in Ohio that will begin producing autonomous weapons systems at industrial scale from July 2026.
Anduril builds the physical systems. Autonomous drones. Counter-drone interceptors. Loitering munitions. Underwater vehicles. Cruise missiles. Its Fury prototype is currently undergoing armed flight testing with the US Air Force's Collaborative Combat Aircraft (CCA) programme - autonomous wingman jets designed to fly alongside manned fighters, carry weapons, and execute strike missions.
The Fury drone carried an AIM-120 air-to-air missile during flight testing in February 2026. A human retains weapons release authority for now.
Anduril’s autonomous weapons have not always worked. More than a dozen drone boats failed during Navy exercises in May 2025. Ukrainian forces stopped using its Altius loitering drones after repeated battlefield failures. A counter-drone test caused a 22-acre fire in Oregon. The company raised $2.5 billion in June 2025 regardless.
Shield AI: The Pilot
Shield AI builds the autonomy software that flies military aircraft without GPS and without human input. Its Hivemind platform has been deployed in drones operating in Ukraine and is now being integrated into Anduril’s CCA prototype. The goal is an AI pilot that can operate in contested electronic warfare environments where human remote control is jammed or blocked.
Shield AI is valued at $5.3 billion. It has never published an AI safety policy.
Lockheed Martin: The Institution
While Palantir, Anduril, and Shield AI are the new generation, Lockheed Martin has been building AI into weapons for decades. Its PAC-3 missile defence interceptor has had embedded AI since inception. Its LRASM anti-ship missile uses AI for autonomous target identification, route planning, and attack coordination at ranges of several hundred miles. Its Skunk Works division is developing autonomous combat aircraft. AI is embedded across its Aegis naval combat system, its F-35 programme, and its long-range strike portfolio.
On its own website, Lockheed Martin describes itself as the industry benchmark for responsible AI.
It has $67 billion in annual revenue. It was not mentioned once in last week’s debate about AI safety.
Where OpenAI and Anthropic Actually Sit
Both companies sit at the top of the stack. They provide the reasoning layer - the models that interpret intelligence, draft assessments, and support decision-making. Neither is directly building autonomous weapons. What they provide is the cognitive infrastructure that makes the systems built by Palantir, Anduril, and Lockheed faster, more accurate, and more lethal.
Anthropic refused to let Claude be used without restrictions across that stack. It drew two lines. No fully autonomous weapons. No mass domestic surveillance. It lost the contract.
OpenAI took the contract with the same two protections written in. The company behind Stargate - which took $110 billion from SoftBank, Nvidia, and Amazon, and whose co-founder donated $25 million to Trump's political PAC - now has its models running on the Pentagon's classified networks alongside Palantir's targeting systems and Anduril's autonomous aircraft.
Whether those protections survive the relationship is a different question.
They Just Build
The companies building autonomous weapons have mastered something the AI labs have not. They do not publish principles. They do not hold safety summits. They do not write blog posts about the existential risks of the technology they are scaling.
They just build.
Anduril is constructing a five-million-square-foot autonomous weapons factory. Palantir has a $10 billion Army contract and a seat at the centre of every major western military's targeting infrastructure. Shield AI is putting an autonomous pilot into a jet fighter. Lockheed Martin has AI in missiles that select and engage targets without human input.
Here is what gets lost in the debate about principles and safety policies and who has a human in the loop.
Weapons kill people.
Not data systems. Not intelligence platforms. Not autonomy software. The thing at the end of the stack - the missile, the drone, the autonomous aircraft carrying an AIM-120 - kills a human being. And every layer of AI sitting above it, making targeting faster and decisions more confident, is operating on models that misidentify targets, hallucinate threat assessments, and produce outputs that even their own developers cannot fully explain.
We know this because the evidence is already there. A war game simulation found large language models chose nuclear options in 95 percent of test runs when objectives were loosely defined. Anduril’s drones crashed in Ukraine. Its drone boats failed in Navy exercises and sailors warned of potential loss of life. These are not edge cases from a technology still finding its feet. This is the technology being scaled to industrial production.
The companies publishing safety policies were not the ones building the weapons. The ones building the weapons never needed a safety policy because nobody asked them for one.