The Benchmark
Lockheed Martin adopted the rules for responsible AI in weapons systems, declared themselves the industry standard, and have never been asked to explain the gap between the principles and the products.
In February 2020, the Department of Defense adopted a set of ethical principles for artificial intelligence. They were developed over fifteen months by the Defense Innovation Board - an independent advisory committee that consulted over 100 experts, held public hearings, reviewed nearly 200 pages of public comment, and ran a classified war game before finalising its recommendations.
The first principle is the most important. Human beings should exercise appropriate levels of judgment and remain responsible for the development, deployment, use, and outcomes of DoD AI systems. The board was explicit about what this meant. You cannot blame the machine. The humans who built it and deployed it carry the responsibility for what it does.
Lockheed Martin was the first aerospace and defence contractor to adopt these principles. They say so on their own website, where they also describe themselves as the industry benchmark for responsible AI.
This is what the benchmark looks like.
PAC-3
Every PAC-3 missile interceptor ever fielded has had AI embedded since the programme’s inception. Not recently added. Not in development. Since inception. The system uses AI to detect, track and engage incoming threats at speeds no human operator can match. Lockheed is now upgrading the PAC-3 MSE with machine learning that adapts to new threat behaviour in real time, while the threat is still in the air.
The PAC-3 has been deployed by the United States and its allies in Europe, the Middle East and Asia. It has been used in active combat. The AI making engagement decisions in those systems has never been the subject of a congressional hearing.
LRASM
The Long Range Anti-Ship Missile uses AI for autonomous target identification, route planning, and attack coordination at ranges approaching 1,000 miles. It can acquire its target without relying on GPS or external data links, using onboard sensors to locate ships and distinguish combatants from neutral vessels in crowded waters.
It carries a 1,000-pound warhead.
The C-3 variant, currently in development, introduces advanced machine learning to enhance autonomous mission planning and target discrimination in intense electronic warfare environments. There is no human making the targeting decision at the moment the missile selects its target. The human authorised the launch. The AI selects what dies.
Aegis
The Aegis Combat System has been on US Navy surface ships since 1983. Lockheed has been embedding AI and machine learning into Aegis to assess threats and support engagement decisions at data speeds no human analyst can process.
In 2024, Lockheed used AI to push real-time capability updates to Aegis-equipped destroyers operating in the Red Sea, enabling them to counter Houthi drone and missile attacks. The updates went from development to deployment in days.
Lockheed recently secured $3.1 billion in contracts to continue Aegis development. The scope of that work explicitly covers capability improvements across all phases of the fire control loop. That phrase - fire control loop - means the sequence from threat detection to weapons release. AI is embedded across it.
On the Aegis AI page of their own website, Lockheed writes: “Our AI moral compass will say a great deal to future generations about how we balanced sensible concerns.”
F-35 Project Overwatch
In early 2025, Lockheed flight-tested an AI combat identification system integrated into the F-35’s information fusion system at Nellis Air Force Base. It was the first time a tactical AI model generated an independent Combat ID directly on a pilot’s display during flight.
The system resolves target identification ambiguities faster than a pilot can process the raw data. It tells the pilot what to shoot at before the pilot could have worked it out unaided. Lockheed describes this as reducing decision latency.
The pilot retains weapons release authority for now.
Autonomous HIMARS
HIMARS fires GPS-guided rockets at ranges up to 300 kilometres. In December 2024, Lockheed demonstrated a driverless HIMARS launcher that navigated using non-emitting sensors, in day and night conditions, with no crew aboard and no human input.
The Army’s vision is a manned HIMARS paired with an autonomous wingman launcher. Full autonomous mission planning capability is in development.
Generative AI Command and Control
Lockheed is integrating generative AI agents to automate command and control across thousands of battlefield assets simultaneously - sensors, shooters, platforms across all domains, managed at speeds and scales no human command structure can replicate.
The Collaborative Combat Aircraft programme is building autonomous drone wingmen for the F-35 - armed autonomous aircraft designed to fly alongside manned fighters and execute missions in contested environments.
The Principle and the Product
The Defense Innovation Board’s first principle requires that humans exercise appropriate judgment and remain responsible for AI outcomes. Lockheed Martin adopted that principle, built their responsible AI brand around it, and describe their compliance with it as the industry standard.
Now consider what they actually built.
A missile that selects its own target at 1,000 miles. An interceptor with AI making engagement decisions since inception. A naval combat system with AI embedded across the entire fire control loop. A fighter jet with AI resolving targeting ambiguities faster than the pilot. A driverless artillery launcher. Generative AI managing thousands of battlefield assets without human oversight at the decision level.
The DIB was explicit: you cannot blame the machine. The humans who built it carry the responsibility. Lockheed Martin built all of this, adopted the principles that were supposed to govern it, declared themselves the benchmark, and have never once been asked to explain the gap between the two in a public forum.
No congressional hearings. No supply chain risk designations. No coverage in the week that Anthropic lost a Pentagon contract for refusing to let its AI make autonomous targeting decisions without a human in the loop.
The History
This is not a company with a clean record on accountability. The Foreign Corrupt Practices Act - the US law that makes it illegal for American companies to bribe foreign government officials - was written in 1977 specifically because of Lockheed. The company had paid $22 million in bribes to officials in West Germany, Italy, the Netherlands, Japan and Saudi Arabia to secure aircraft contracts. A Dutch prince. A Japanese prime minister. Defence ministers across Europe.
Since then, Lockheed has accumulated at least 81 instances of alleged misconduct and admitted fraud. A $28.4 million FCPA penalty in 1995 for bribing an Egyptian official. A $27.5 million settlement in 2015 for charging the US government for work performed by unqualified staff. Overbilling across multiple programmes spanning decades.
This is the company that sets the standard for responsible AI in the defence industry.
What This Is Really About
The debate about AI and autonomous weapons has been captured by the companies that publish principles. Anthropic published principles. OpenAI published principles. They became the subject of congressional scrutiny, press coverage, and Pentagon supply chain risk designations precisely because they put their commitments in writing and could therefore be held to them.
Lockheed Martin published principles too. They adopted the DoD framework, built a responsible AI brand around it, and embedded their AI across the fire control loop of the US military’s most critical weapons systems. The difference is that nobody is checking.
The DIB said you cannot blame the machine. What they did not anticipate is a world where the companies building the machines would adopt the principles, claim the benchmark, and face no mechanism for anyone to verify whether either claim is true.
Lockheed Martin’s AI moral compass, as they put it, will say a great deal to future generations.
We are already in the future they are building.