TL;DR
Anthropic sued the Pentagon last week. Two AI companies raised a billion dollars each. One of them has a product. Meta is considering cutting 16,000 people. Oracle may cut 30,000. GPT-5.4 arrived and got on with it.
The war in Iran focused attention on AI on the battlefield.
Contents
This Week
Anthropic and DoW: The language model enters the courtroom
LeCun and Fei-Fei Li: A billion dollars each. Only one receipt.
The glasses are watching.
What the tech bros are doing
Model releases
Reading - Army of None
Person to follow - Stacie Pettyjohn
This week
The line Dario Amodei would not cross
A billion dollars each. Only one receipt.
What world models actually are, and why everyone suddenly wants one
The layoffs that AI is paying for
GPT-5.4, Gemini Flash Lite, and DeepSeek's empty chair
Anthropic and DoW: The language model enters the courtroom.
On 9 March, Anthropic files two lawsuits. One in the US District Court for the Northern District of California. Another in the DC Circuit Court of Appeals.
The claim is simple but unusual. The Department of War labelled the company a supply chain risk.
Anthropic says the designation is retaliation for speech protected by the First Amendment. A phrase used for hostile states is now applied to a domestic AI laboratory.
The case moves quickly.
A federal judge in San Francisco advances the preliminary injunction hearing to 24 March, noting that the dispute carries consequences for both sides. During the exchange the government’s lawyer is asked whether any further action against the company will occur before the hearing.
He cannot offer that assurance.
The arguments begin to assemble around the case.
Thirty-seven researchers from OpenAI and DeepMind sign a brief supporting a competitor.
Their warning is not about Anthropic alone. They argue that a supply chain designation could damage American competitiveness and discourage public debate about the risks of artificial intelligence.
Microsoft files separately. The company describes a different concern.
Military systems have already been built around Anthropic’s technology. Removing that component, they argue, would produce severe economic disruption.
Another group appears. Retired military leaders and former intelligence officials, including Michael Hayden, who once directed the CIA.
Their argument is institutional rather than technical. They say the conduct of the Defense Secretary threatens principles of the rule of law that have long governed the American military.
A strange coalition forms. Rival laboratories. Software companies. Former intelligence chiefs.
All orbiting the same question.
When artificial intelligence becomes part of the military supply chain, who decides the limits of its use?
The answer will not come from the laboratory.
The hearing is set for 24 March.
LeCun and Fei-Fei Li: A billion dollars each. Only one receipt.
Yann LeCun left Meta, moved to Paris, and raised $1.03bn. Nvidia backed it. So did Jeff Bezos, Temasek, Samsung. The company is called AMI Labs. It is less than six months old. No product. No revenue. No timeline for either. The CEO said it could take years before the research produces anything commercial.
They still got the billion.
Three weeks earlier, Fei-Fei Li’s World Labs raised the same amount. Autodesk put in $200m. AMD, Nvidia, Fidelity came in alongside. World Labs has a product called Marble. It builds editable 3D environments from a text prompt or an image. Free and paid tiers. It has users.
The coverage was not the same. You couldn’t miss one and probably didn’t hear of the other.
Both companies are building world models. AI that learns by observing how physical reality behaves, not by consuming language. The argument is that text alone cannot produce genuine intelligence. The world has geometry, physics, time. A model trained only on words will never really understand any of those things.
LeCun has been making this argument for years. He is probably right. Whether being right and having a billion dollars is enough is a different question.
Fei-Fei Li is not waiting to find out. She has already built something.
You may have noticed whose raise made the front pages.
From Discarded.AI: Link
The glasses are watching.
Meta sold seven million pairs of Ray-Ban AI glasses in 2025. The marketing said: designed for privacy, controlled by you.
An investigation published on 27 February by two Swedish newspapers, Svenska Dagbladet and Göteborgs-Posten, found something else.
Footage captured by the glasses moves through servers in Luleå and Denmark and arrives in an office in Nairobi, where workers employed by a subcontractor called Sama open it and label what they see.
One annotator described the work simply. We see everything, they said. From living rooms to naked bodies.
Workers described footage of people using the toilet. People undressing. One account of a man leaving his glasses on a bedside table, his partner walking in and changing, unaware. Bank cards. Intimate conversations. The blurring of faces, which Meta says is automatic, does not always work. Particularly in low light.
The workers operate under confidentiality agreements they cannot break without losing their income. Personal phones are not permitted in the office. Cameras watch the room. You are not supposed to ask questions, one worker said. If you start asking questions, you are gone.
A class action lawsuit was filed in San Francisco days after the investigation published. The UK’s Information Commissioner’s Office wrote to Meta requesting answers.
Meta’s response referred journalists to its terms of service.
[Link to Svenska Dagbladet investigation] [Link to class action lawsuit]
What the tech bros are doing
Two companies. Two sets of numbers.
Meta is considering cutting up to 20% of its workforce. Around 16,000 people. No date. No final number. A spokesperson called the Reuters report speculative reporting about theoretical approaches.
Zuckerberg is spending $600bn on data centres by 2028.
Oracle announced cuts of between 20,000 and 30,000 jobs. The same week it posted strong earnings. The stock went up. The company is carrying $108bn in debt. Raised to fund a $50bn data centre buildout.
Someone has to pay for the infrastructure.
Model releases
GPT-5.4: OpenAI, 5 March. Stronger at agentic work, tool use, and coding. Replaces GPT-5.2 Thinking for paid users.
Gemini 3.1 Flash Lite: Google. 2.5 times faster than earlier versions. $0.25 per million input tokens.
DeepSeek V4: Still not here. Expected multimodal. Every predicted window has passed without a release.
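For readers comparing providers, the Flash Lite price above translates into dollars straightforwardly. A minimal sketch, using only the quoted input rate of $0.25 per million tokens; output pricing is not given here, and the token counts in the example are illustrative assumptions:

```python
# Input-token cost at Gemini 3.1 Flash Lite's quoted rate.
# Only the input price ($0.25 per 1M tokens) appears in this issue,
# so output costs are deliberately left out.
PRICE_PER_M_INPUT = 0.25  # USD per million input tokens


def input_cost(tokens: int) -> float:
    """Return the USD cost of sending `tokens` input tokens."""
    return tokens / 1_000_000 * PRICE_PER_M_INPUT


# Example: a hypothetical 400,000-token batch of documents.
print(f"${input_cost(400_000):.2f}")  # → $0.10
```

At that rate, even a million input tokens costs a quarter, which is the point Google is making with the "Lite" branding.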
Reading
Army of None, by Paul Scharre. Scharre spent years inside the Pentagon before he wrote it. The question he kept asking was a simple one.
When a machine decides to kill someone, who is responsible?
Not in the legal sense. In the human sense.
The book was published in 2018. It has not dated.
Person to follow
Stacie Pettyjohn
Senior Fellow and Director of the Defense Program, Center for a New American Security.
She spent a decade at RAND before joining CNAS. Her subject is how wars are actually fought, as opposed to how they are planned.
This month she co-authored the Hellscape report. A proposal for Taiwan to defend itself with dense swarms of autonomous drones, without waiting for American forces to arrive.
She is also writing on the Anthropic dispute.
Two threads. One question.
Find her at cnas.org
Events
Next week I start the Bluedot.org Technical Safety course.
If this newsletter is useful, the Wednesday edition goes further. The analysis behind the stories, the context that did not fit here, and the things that did not make the free edition. Paid subscribers get that every week.


