
The Department of War’s AI problem (and why Anthropic matters)

Haven Vu, Founder & CEO of Spacetime · 4 min read
[Ink-style illustration: US DoD AI vs China Anthropic dispute]

TL;DR

The US military’s AI race with China isn’t mainly a model race. It’s a constraints race: data access, compute, procurement, legal authority, and who gets to set the guardrails. The Pentagon’s dispute with Anthropic over autonomous weapons and mass domestic surveillance is a preview of what regulated enterprises will eventually fight about too.

We renamed the Department of War because the mission sounded ugly in public. The incentives never left.

The paperwork goes back to the National Security Act of 1947, when “War” got administratively softened into a new defense establishment (National Security Archive).

So it’s fitting that “Department of War” is back in circulation right as AI becomes a first-class input to how states see, decide, and act (NPR).

If you build AI for regulated customers, this story is not “defense tech.” It’s a preview of your future contract negotiations.

The Problem

The public debate keeps collapsing into morality plays: AI good, AI bad, killer robots.

The operational problem is simpler. Once a frontier model sits across workflows, whoever controls its constraints controls what the organization can do.

That’s why the Pentagon’s dispute with Anthropic matters. It turned “terms of service” into a geopolitical object.

What’s actually new

Militaries have used math to target things for a long time. What’s new is where the math lives.

Frontier models are now general-purpose enough to sit on top of messy, human workflows. They triage video feeds. They summarize intelligence reports. They draft operational plans and route analysts. They even write the procurement language that buys the next system.

DoD has been reorganizing to make that possible. The Chief Digital and AI Office (CDAO) is the umbrella for data, digital, and AI efforts (ai.mil, CRS).

DoD has also written down AI principles like “traceable” and “governable.” That reads like a memo until you remember it becomes the spec after something fails (DoD memo PDF).

And programs like Replicator are explicit about scale. They’re about fielding lots of autonomous systems fast, not admiring a demo (DIU, CRS).

US vs China: same game, different constraints

The shared goal is to compress the sensor → decision → action loop. The real story is the bottleneck.

For the US, the bottleneck is structural: slow procurement, heavy oversight, and a fragmented mess of legacy data systems. That friction prevents quiet, irreversible mistakes, but it also kills iteration speed.

China faces a different ceiling. While the state can align faster—their 2017 national AI plan reads like a blueprint for coordinated advantage (DigiChina translation)—they are capped by hardware. US export controls on advanced chips effectively throttle the compute ceiling (BIS, CRS).

Practical translation: China can align faster inside the state. The US can build better tools in the private market. Both sides hit walls that have nothing to do with prompts.

The Anthropic dispute: the real fault line

Based on reporting and Anthropic’s own statement, the core dispute was about two red lines: use in fully autonomous weapons and mass domestic surveillance (NPR, Anthropic).

I’m not in the room. I don’t know the exact contract language. But the fault line is obvious: the government wants “any lawful use.” The vendor wants veto power over categories of use.

Lawfare argues the “supply chain risk” move runs into legal and practical problems (Lawfare). Others report Anthropic plans to fight it (Axios).

This tension will repeat anywhere AI touches sensitive data and high-stakes decisions.

What founders/operators should infer

This dispute changes the roadmap for anyone building in high-stakes verticals.

Treat vendor terms as part of your compliance stack. If you sell into government or healthcare, assume you will negotiate “allowed uses.” This isn’t just a Pentagon problem—expect similar friction if you touch HIPAA data or FedRAMP environments.

Auditability must come before features. You need to know exactly who ran the prompt. You need to see the data they fed it. You need to audit the output it generated. If you can’t trace that chain, you’re gambling with your customer’s job.
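As a concrete sketch of what that chain can look like, here is a minimal append-only audit record in Python. Everything here is illustrative: the AuditRecord fields and log_llm_call helper are hypothetical names, not any vendor’s API. The point is that every model call captures actor, inputs, output, and model version in a record you can replay later.

```python
# Minimal sketch of an append-only audit log for model calls.
# AuditRecord and log_llm_call are hypothetical names for illustration.
import hashlib
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class AuditRecord:
    actor: str           # who ran the prompt (authenticated user or service ID)
    prompt_sha256: str   # hash of the exact prompt text
    input_refs: list     # pointers to the data records fed into the context
    output_sha256: str   # hash of the model's response
    model: str           # which model/version produced the output
    ts: float            # when the call happened

def log_llm_call(actor, prompt, input_refs, output, model, sink):
    """Append one record per model call to a write-only sink (e.g. an open file)."""
    record = AuditRecord(
        actor=actor,
        prompt_sha256=hashlib.sha256(prompt.encode()).hexdigest(),
        input_refs=input_refs,
        output_sha256=hashlib.sha256(output.encode()).hexdigest(),
        model=model,
        ts=time.time(),
    )
    sink.write(json.dumps(asdict(record)) + "\n")
    return record
```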

Customers will eventually demand configurable constraints. They want hard controls without rewriting your system.
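A minimal sketch of that pattern, assuming an enforcement layer that sits in front of the model: the constraints live in a config the customer edits, and the check runs before any call is made. The category names and the POLICY shape are invented for illustration, not anyone’s actual contract terms.

```python
# Hypothetical customer-editable policy enforced outside the model.
# Category names and the POLICY structure are illustrative assumptions.
POLICY = {
    "blocked_use_categories": ["autonomous_targeting", "mass_domestic_surveillance"],
    "require_human_approval": ["operational_planning"],
}

def enforce(use_category: str, has_human_approval: bool = False) -> None:
    """Raise before the model is ever called; the customer edits POLICY, not your code."""
    if use_category in POLICY["blocked_use_categories"]:
        raise PermissionError(f"'{use_category}' is blocked by contract")
    if use_category in POLICY["require_human_approval"] and not has_human_approval:
        raise PermissionError(f"'{use_category}' requires human sign-off")
```

The design choice that matters: changing a constraint is a config edit, not a redeploy.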

The export-control arc proves that models are not interchangeable commodities (BIS). Treat your provider like a strategic dependency, not a utility.

Org charts and checkbooks always move before the data plumbing catches up.

What to do next

If you’re building AI for regulated environments, copy the military mindset without the weapons: bounded capabilities, clear human responsibility, logs, and an off switch.
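One hedged sketch of that mindset in code, assuming a gateway that every agent action passes through. The class name and capability strings are illustrative; the pattern is what counts: an allow-list of capabilities plus a single off switch that a named human owns.

```python
# Sketch of bounded capabilities plus an off switch, in one gateway.
# AgentGateway and the capability strings are hypothetical names.
class AgentGateway:
    def __init__(self, allowed_capabilities: set[str]):
        self.allowed = allowed_capabilities  # the agent can do this list, nothing else
        self.disabled = False                # the off switch

    def shutdown(self) -> None:
        """One call halts all agent activity; a human owns this decision."""
        self.disabled = True

    def invoke(self, capability: str, action, *args, **kwargs):
        if self.disabled:
            raise RuntimeError("Agent gateway disabled by operator")
        if capability not in self.allowed:
            raise PermissionError(f"Capability '{capability}' is out of bounds")
        # in a real system, write the audit record here before returning
        return action(*args, **kwargs)
```

Usage is one line per action: gateway.invoke("summarize_reports", summarize, doc). And shutdown() stops everything, immediately.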

The winning product isn’t the one with the highest benchmark. It’s the one that survives a Congressional hearing.

Sources

OpenAI announces Pentagon deal after Trump bans Anthropic (NPR) — Dispute overview and the two contested red lines.
Anthropic statement on Secretary of War comments — Company framing and legal stance.
Pentagon’s Anthropic designation won’t survive first contact with legal system (Lawfare) — Legal analysis and implications.
CDAO official site (ai.mil) — DoD’s digital and AI organization.
Replicator Initiative (DIU) — Program framing and goals.
China’s 2017 AI plan (DigiChina translation) — Primary-source translation.
Commerce strengthens export controls restricting China’s advanced semiconductor capability (BIS) — Official export-control framing.
U.S. export controls and China: advanced semiconductors (CRS) — Congressional Research Service summary.
National Security Act of 1947 (National Security Archive) — Historical context for “War” → “Defense” bureaucratic reframing.
