Spacetime Agents

The US Department of War is adopting AI. China already did.

Haven Vu, Founder & CEO of Spacetime · 4 min read
[Hero image: ink-style illustration for the blog post, US Department of War AI vs China]

The problem

If you build AI, you are already building for power.

Power is not a vibe. It is who gets to decide what happens next.

That is why the US Department of Defense will keep buying AI even if every press cycle turns into a moral panic.

And it is why China will keep pushing AI into its military stack even if it makes the outside world uncomfortable.

What’s happening

The US defense world is finally treating AI like infrastructure instead of a demo.

Not just chatbots. Real systems.

The kind that triage intelligence, fuse sensor data, plan logistics, and speed up targeting workflows.

You can already see it in public artifacts.

Citations needed:

  • Example 1: DoD AI org or program and what it does. Link: [TODO from Hunter brief]
  • Example 2: A named DoD task force or policy memo relevant to AI use. Link: [TODO from Hunter brief]

On the China side, the direction is also not subtle. They have been public about “intelligentized” warfare for years.

Citations needed:

  • Example 3: A China or PLA doctrine reference or government statement. Link: [TODO from Hunter brief]

Incentives make it inevitable

People talk about “whether” we should use AI in war.

That question misses the mechanism.

If one side uses AI to shrink decision time from hours to minutes, the other side has to respond.

Not because they love AI.

Because they love not losing.

That is the ugly part about arms races. They punish restraint.

They reward capability.

You can wrap this in ethics language, but the incentives do not care.

US vs China: how adoption actually happens

The difference is not that one country is good and the other is evil.

The difference is the shape of their institutions.

The US

The US adopts through contracts.

That means procurement cycles, requirements documents, compliance frameworks, and vendors.

It also means oversight.

A lot of it.

That slows things down.

It also creates a paper trail, which matters when the system fails.

And it will fail.

It also creates a market.

A real one.

If you can clear the hurdles, you can build a durable business.

Citations needed:

  • Example 4: A public US DoD contract award, pilot, or program using LLMs or AI. Link: [TODO from Hunter brief]

China

China adopts through mandate.

When the system decides something is strategic, it can push a coordinated build across industry, labs, and the military.

The speed advantage is obvious.

The tradeoff is opacity.

You do not get the same public accountability.

You also do not get the same vendor market dynamics.

It is closer to a national stack.

Citations needed:

  • Example 5: A public statement or initiative connecting AI to defense modernization. Link: [TODO from Hunter brief]

Where the Anthropic dispute fits

The Anthropic controversy is not “AI drama.”

It is the inevitable friction between 3 forces.

1) Governments want capability.

2) Companies want revenue and influence.

3) Employees and the public want boundaries.

When an AI lab touches defense work, every one of those forces shows up.

In public.

With receipts.

Citations needed:

  • Anthropic dispute summary and the primary source link. [TODO from Hunter brief]
  • Any relevant statements from Anthropic, the DoD, or reported partners. [TODO from Hunter brief]

My contrarian take is simple.

The only stable “ethics” here is system design.

Policies change. Leadership changes. Public pressure spikes and fades.

But logs, audit trails, clear human approval points, and hard constraints survive.

So the conversation should focus less on virtue and more on architecture.

If you’re a founder

You do not need a geopolitical hot take.

You need a plan.

Here are 3 moves I would make.

1) Build with provenance as a first-class feature

Every output should have traceable inputs, model version, and a reason chain that can be reviewed.

If you cannot explain what happened, you will not survive procurement.
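What "provenance as a first-class feature" could look like in practice is a record attached to every output: content-addressed inputs, the exact model version, and a reviewable reason chain. Here is a minimal sketch. The schema, field names, and `record_output` helper are all hypothetical; treat it as a shape, not a spec.

```python
import hashlib
import json
from dataclasses import dataclass, field, asdict
from datetime import datetime, timezone

@dataclass
class ProvenanceRecord:
    """Reviewable trail for a single model output (hypothetical schema)."""
    model_version: str
    input_hashes: list   # content hashes of every input the output depended on
    reason_chain: list   # ordered, human-readable steps behind the output
    created_at: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

def hash_input(content: bytes) -> str:
    # Content-address each input so a reviewer can verify what was actually used.
    return hashlib.sha256(content).hexdigest()

def record_output(model_version: str, inputs: list, reason_chain: list) -> str:
    rec = ProvenanceRecord(
        model_version=model_version,
        input_hashes=[hash_input(i) for i in inputs],
        reason_chain=reason_chain,
    )
    # In a real system this would go to an append-only store, not a string.
    return json.dumps(asdict(rec), indent=2)

print(record_output(
    "model-2026-01",
    [b"sensor feed A", b"intel summary B"],
    ["fused two sources", "flagged anomaly", "recommended human review"],
))
```

The point is not this exact schema. The point is that the record exists before anyone asks for it.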

2) Ship human override points that are real

A “human in the loop” label is worthless.

Define the decision gates.

Make it impossible to bypass them without leaving evidence.
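One way to make a gate impossible to bypass silently is a hash-chained approval log: every entry embeds the hash of the previous one, so editing or deleting a record breaks the chain on audit. This is a sketch, not a hardened design; the class and method names are invented, and a real system would anchor the chain head in external storage.

```python
import hashlib
import json
from datetime import datetime, timezone

class ApprovalGate:
    """A decision gate that cannot be passed without leaving evidence.

    Each log entry chains the hash of the previous entry. In production the
    current head hash would be stored outside the attacker's reach.
    """

    def __init__(self):
        self._log = []
        self._head = "0" * 64  # hash of the latest entry

    def request(self, action: str, approver: str, approved: bool) -> bool:
        entry = {
            "time": datetime.now(timezone.utc).isoformat(),
            "action": action,
            "approver": approver,
            "approved": approved,
            "prev_hash": self._head,
        }
        self._head = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()
        ).hexdigest()
        self._log.append(entry)
        return approved

    def verify(self) -> bool:
        # Recompute the chain; False means a record was edited or removed.
        prev = "0" * 64
        for entry in self._log:
            if entry["prev_hash"] != prev:
                return False
            prev = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()
            ).hexdigest()
        return prev == self._head

gate = ApprovalGate()
gate.request("release targeting package", approver="analyst_1", approved=False)
assert gate.verify()
```

A denied request leaves the same evidence as an approved one. That is the feature.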

3) Pick your customer based on stomach, not hype

Defense budgets are real.

So are the reputational and operational risks.

If you do not want to touch this space, do not.

But if you do, treat it like regulated infrastructure from day 1.

Suggested hero image alt text

A split image showing US and China military silhouettes overlaid with circuit traces and data streams.

5 potential tweet hooks

1) War is an incentive machine. AI is a compounding advantage machine. Of course they are colliding.

2) The US will buy defense AI through procurement and paperwork. China will roll it out through mandate.

3) The real ethics debate in defense AI is not a statement. It is logging, auditability, and override points.

4) The Anthropic dispute is a preview of every AI lab’s future. Values, contracts, national security. Same collision.

5) If your AI product cannot survive an audit trail, you do not have a defense product. You have a demo.

I reply to all emails if you want to chat:
