Spacetime Agents

The Department of War’s AI problem (and why Anthropic matters)

Haven Vu, Founder & CEO of Spacetime · 4 min read
[Editorial illustration: an AI arms race, with opposing autonomous systems and satellites converging toward a central compute core.]

TL;DR

The US military’s AI race with China isn’t mainly a model race. It’s a constraints race: data access, compute, procurement, legal authority, and who gets to set the guardrails. The Pentagon’s dispute with Anthropic over autonomous weapons and mass domestic surveillance is a preview of what regulated enterprises will eventually fight about too.

We renamed the Department of War because the mission sounded ugly in public. The incentives never left.

The paperwork goes back to the National Security Act of 1947, when “War” got administratively softened into a new defense establishment (National Security Archive).

So it’s fitting that “Department of War” is back in circulation right as AI becomes a first-class input to how states see, decide, and act (NPR).

If you build AI for regulated customers, this story is not “defense tech.” It’s a preview of your future contract negotiations.

The Problem

The public debate keeps collapsing into morality plays: AI good, AI bad, killer robots, etc.

The operational problem is simpler. Once a frontier model sits across workflows, whoever controls its constraints controls what the organization can do.

That’s why the Pentagon’s dispute with Anthropic matters. It turned “terms of service” into a geopolitical object.

What’s actually new

Militaries have used math to target things for a long time. What’s new is where the math lives.

Frontier models are now general-purpose enough to sit on top of messy, human workflows: triaging video, summarizing intel, drafting plans, routing analysts, writing procurement language.

DoD has been reorganizing to make that possible. The Chief Digital and AI Office (CDAO) is the umbrella for data, digital, and AI efforts (ai.mil, CRS).

DoD has also written down AI principles like “traceable” and “governable,” which read like a memo until you remember they become the spec after something fails (DoD memo PDF).

And programs like Replicator are explicit about scale. They’re about fielding lots of autonomous systems fast, not admiring a demo (DIU, CRS).

US vs China: same game, different constraints

The shared goal is to compress sensor → decision → action.

The interesting part is what each side can’t do.

US constraints:

- Slow procurement and heavy oversight: great at preventing quiet, irreversible mistakes, bad at iteration speed.
- Fragmented data and authority, plus a long tail of legacy systems.

China constraints:

- Strong state direction. China’s 2017 national AI plan reads like a blueprint for coordinated national advantage (DigiChina translation).
- A compute ceiling shaped by US export controls on advanced chips and related tooling (BIS, CRS).

If you want a practical read: China can often align faster inside the state. The US can often build better tools in the private market. Both sides hit walls that have nothing to do with prompts.

The Anthropic dispute: the real fault line

Based on reporting and Anthropic’s own statement, the core dispute was about two red lines: use in fully autonomous weapons and mass domestic surveillance (NPR, Anthropic).

I’m not in the room, so I’m cautious about pretending I know exact contract language. But the fault line is obvious: the government wants “any lawful use,” and the vendor wants veto power over categories of use.

Lawfare argues the “supply chain risk” move runs into legal and practical problems (Lawfare). Others report Anthropic plans to fight it (Axios).

This tension will repeat anywhere AI touches sensitive data and high-stakes decisions.

What founders/operators should infer

- Vendor terms are now part of your compliance stack. If you sell into government, healthcare, or finance, assume you will negotiate “allowed uses.”
- Build auditability first. If you can’t answer who used the model, on what data, and what it produced, you’re gambling with your customer’s job.
- Make constraints configurable. Customers will want hard controls without rewriting your system.
- Treat model providers like strategic dependencies. The export-control arc should have killed the fantasy that compute and models are interchangeable commodities (BIS).
- Expect narrative-driven budget shifts. Org charts and checkbooks move before data plumbing catches up.

What To Do Next

If you’re building AI for regulated environments, copy the military mindset without the weapons: bounded capabilities, clear human responsibility, logs, and an off switch.
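The “bounded capabilities, logs, and an off switch” advice can be sketched as a thin policy gate that sits in front of every model call. This is a minimal illustration, not any vendor’s actual API: the `PolicyGate` class, its category names, and the deny list are all hypothetical.

```python
import hashlib
import time
from dataclasses import dataclass, field


@dataclass
class PolicyGate:
    """Hypothetical allowed-use gate: configurable deny list, audit trail, off switch."""
    denied_categories: set = field(
        default_factory=lambda: {"autonomous_targeting", "mass_surveillance"}
    )
    kill_switch: bool = False  # the hard "off switch" an operator can flip
    audit_log: list = field(default_factory=list)

    def check(self, user: str, category: str, prompt: str) -> bool:
        """Record who asked for what, then allow or deny the call."""
        allowed = (not self.kill_switch) and category not in self.denied_categories
        self.audit_log.append({
            "ts": time.time(),
            "user": user,
            "category": category,
            # store a digest, not raw prompt text, so the log itself isn't sensitive
            "prompt_sha": hashlib.sha256(prompt.encode()).hexdigest()[:16],
            "allowed": allowed,
        })
        return allowed


gate = PolicyGate()
print(gate.check("analyst-7", "intel_summary", "Summarize this report"))  # allowed
print(gate.check("analyst-7", "mass_surveillance", "Track everyone"))     # denied
```

The point of the design is that constraints live in configuration (the deny list) rather than in code, and every decision, allowed or not, lands in an append-only log you can hand to an auditor.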

The winning product is the one that survives the political cycle, not the one that wins a benchmark.

FAQ

What is the Pentagon’s dispute with Anthropic about?
Reporting and Anthropic’s statement describe a conflict over keeping restrictions related to fully autonomous weapons and mass domestic surveillance in a Pentagon deal, versus broader “any lawful use” rights (NPR, Anthropic).

Is “DoD AI vs China AI” mainly a model race?
It looks more like a constraints race: data access, compute, procurement speed, and legal authority. Model quality matters, but it’s rarely the bottleneck.

Why should non-defense companies care?
Because the same governance fight appears wherever AI touches sensitive data. Your customer will want control. Your vendor will want limits. Your regulator will want accountability.

Sources

- OpenAI announces Pentagon deal after Trump bans Anthropic (NPR): dispute overview and the two contested red lines.
- Anthropic statement on Secretary of War comments: company framing and legal stance.
- Pentagon’s Anthropic designation won’t survive first contact with legal system (Lawfare): legal analysis and implications.
- CDAO official site (ai.mil): DoD’s digital and AI organization.
- Replicator Initiative (DIU): program framing and goals.
- China’s 2017 AI plan (DigiChina translation): primary-source translation.
- Commerce strengthens export controls restricting China’s advanced semiconductor capability (BIS): official export-control framing.
- U.S. export controls and China: advanced semiconductors (CRS): Congressional Research Service summary.
- National Security Act of 1947 (National Security Archive): historical context for the “War” → “Defense” bureaucratic reframing.

I reply to all emails if you want to chat.
