Editor’s note:
This piece is part of my wider series on systems of control. Each review explores not just what a book says, but what it reveals about how our institutions are adapting to automation. Anthony King’s AI, Automation, and War is less a book about conflict than about governance. It studies how command, accountability and risk are being rewritten by code.
What the book is really about
At first glance, this looks like another study of killer robots. It is not.
King’s argument is quieter, but more dangerous.
He shows that AI is not automating warfare; it is rewiring it. The future battlefield is not a place where machines replace humans, but one where human judgement is mediated through layers of data, prediction and simulation. Commanders no longer see the world directly; they see a model of it.
That distinction matters. In King’s hands, AI is not a weapon but an epistemology. It changes what can be seen, what counts as evidence, and how decisions are justified.
This is not the fantasy of fully autonomous drones deciding who lives or dies. It is the slower, more insidious process by which decision systems become infrastructural, embedding themselves so deeply that nobody can see where judgement ends and automation begins.
King’s message is not that humans will lose control to AI. It is that control is becoming a shared illusion, maintained by systems that appear transparent because they are efficient.