Spacetime Agents

Blog

Opinions, insights, and dispatches on AI agents, automation, and building the future.

Black-and-white editorial illustration of a mechanical memory-sorting machine reaching through a crowded maze of folders, wires, and paper fragments as the wrong memories clog the foreground and one fragile shard is pulled into the light.

Why Agent Memory Still Sucks — And What Actually Works

Most agent memory systems do not fail because they cannot store information. They fail because they retrieve the wrong thing, keep stale facts alive, compress away nuance, and cannot decide what matters right now.

Ink-style illustration for the blog post: OpenClaw changes computing

OpenClaw is going to change computing forever

If you’re a growing business, the bottleneck is operational throughput with the headcount you already have. OpenClaw is a local-first personal assistant you can operate like a service today, and it points to the next arc: agents becoming a managed enterprise service with approvals, audit logs, and role-based access. The control plane is the interesting part, not the model.

Ink-style illustration showing a polished demo robot on a pedestal contrasted with messy production reality

Why 95% of AI Pilots Fail (And How to Be in the 5%)

AI pilot failure usually comes from readiness, governance, and measurement gaps. Use this checklist to move pilots to production.

Ink-style illustration for the blog post: US DoD AI vs China Anthropic dispute

The Department of War’s AI problem (and why Anthropic matters)

A blunt breakdown of US DoD AI vs China’s AI push, and what the Anthropic dispute reveals about who controls AI guardrails.

Ink-style illustration for the blog post: why AI agents fail engineering teams

Why Most AI Agents Fail (And What Engineering Teams Do About It)

Most AI agents fail in production due to vague success criteria, messy inputs, weak guardrails, and no observability. Here’s what to do instead.

Ink-style illustration for the blog post: US Department of War AI vs China

The US Department of War is adopting AI. China already did.

The problem: if you build AI, you are already building for power. Power is not a vibe; it is who gets to decide what happens next.

Ink-style illustration for the blog post: LLM cost optimization playbook

LLM cost optimization 2025: cut inference spend safely

LLM cost optimization in 2025 is mostly an engineering discipline: measure cost per successful outcome, then apply caching, routing, batching, and quantization.

Ink-style illustration for the blog post: model context protocol enterprise guide

Model Context Protocol enterprise: what MCP changes

Model Context Protocol (MCP) standardizes how AI systems connect to tools and data. Here’s how to pilot MCP without creating a security mess.

Ink-style illustration for the blog post: AI marketing automation problems

AI Marketing Automation: Why your marketing gets zero traffic

AI marketing automation fails when data is shattered, errors ship, and governance is missing. Here's a practical playbook to fix it without a stack rebuild.

Ink-style illustration for the blog post: the death of SaaS AI contracts

The Death of SaaS? AI Is Forcing a New Buy vs Build Playbook

Seat-priced, UI-first SaaS is getting repriced as agents do more of the clicking. Keep systems of record. Build a thin AI layer where your workflow is your advantage. Stop renting labor; start renting databases.

Ink-style illustration for the blog post: AI agent production failures

AI agent production failures: why 85% fail and how to fix them

AI agent production failures are usually evaluation and observability failures. Here’s a practical reliability checklist you can implement this week.

Ink-style illustration of a coordinated team of AI agents (an AI army)

Why We Build AI Agent Armies, Not AI Tools

A blueprint for deploying AI agents as a team: roles, orchestration, guardrails.

AI adoption resistance: why employees won’t use AI tools (and what to do instead)

Overcome AI adoption resistance by making AI usage safe, specific, and worth someone’s time: pick 2–3 workflows with clear owners, allocate protected pilot time, provide guardrails and examples, and measure outcomes tied to the job. Most teams fail because they roll out a tool without changing incentives, risk, or process. Adoption is a design problem, not a motivation problem.

HubSpot Salesforce integration issues: how to stop sync errors, loops, and broken automations

Fix HubSpot–Salesforce integration issues by defining source-of-truth rules per object, tightening field mappings, and testing in sandboxes before turning on bidirectional sync. Most “sync errors” are predictable: conflicting required fields, duplicate handling, and workflow loops. Add monitoring and rollback so integrations can fail safely without corrupting your CRM.
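The source-of-truth idea from the teaser can be sketched in a few lines. This is a hypothetical illustration, not HubSpot or Salesforce API code: the `SOURCE_OF_TRUTH` table, object names, and field names are all made up to show the shape of a per-object, per-field conflict rule.

```python
# Hypothetical per-object, per-field source-of-truth rules for a
# bidirectional CRM sync. Which system "wins" is declared up front,
# so conflicts resolve deterministically instead of ping-ponging.
SOURCE_OF_TRUTH = {
    "contact": {
        "email": "salesforce",          # sales owns identity fields
        "lifecycle_stage": "hubspot",   # marketing owns funnel fields
    },
}

def resolve(object_type, field, hubspot_value, salesforce_value):
    """Pick the winning value for one field during a sync conflict."""
    winner = SOURCE_OF_TRUTH.get(object_type, {}).get(field)
    if winner == "hubspot":
        return hubspot_value
    if winner == "salesforce":
        return salesforce_value
    # No declared rule: fall back to the first non-empty value.
    return salesforce_value or hubspot_value

print(resolve("contact", "email", "old@x.com", "new@x.com"))  # salesforce wins
```

Declaring the rule per field (not per object) matters: most loops start when both systems think they own the same field.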

Improve RAG performance: how to fix RAG retrieval accuracy when it pulls the wrong docs

To improve RAG retrieval accuracy, stop guessing and start measuring: build a small eval set, track recall@k, and inspect failed queries. The highest-impact fixes are usually hybrid retrieval (BM25 + vector), better chunking + metadata, domain-appropriate embedding models, and a reranker. Most “RAG hallucinations” are retrieval failures, not model failures.
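The "build a small eval set, track recall@k" step is small enough to sketch. This is a minimal illustration with a toy keyword retriever standing in for real BM25/vector search; the `retrieve` function, corpus, and eval set are all invented for the example.

```python
# Minimal recall@k harness for a RAG retriever.
# eval_set pairs each query with the doc ID a human judged relevant.

def recall_at_k(eval_set, retrieve, k=5):
    """Fraction of queries whose relevant doc appears in the top-k results."""
    hits = 0
    for query, relevant_id in eval_set:
        if relevant_id in retrieve(query, k):
            hits += 1
    return hits / len(eval_set)

# Toy corpus and retriever (naive keyword overlap) just to show the shape.
corpus = {"d1": "refund policy", "d2": "api rate limits", "d3": "sso setup"}

def retrieve(query, k):
    words = set(query.split())
    ranked = sorted(corpus, key=lambda d: -len(words & set(corpus[d].split())))
    return ranked[:k]

eval_set = [
    ("what is the refund policy", "d1"),
    ("how do api rate limits work", "d2"),
]
print(recall_at_k(eval_set, retrieve, k=2))
```

Even 30–50 labeled queries like this turn "the RAG is hallucinating" into a measurable retrieval number you can move.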

LLM API cost overruns: how to prevent an unexpected OpenAI bill

Prevent LLM API cost overruns by separating keys per environment, enforcing per-feature budgets, and instrumenting token + request usage with alerts. The biggest savings usually come from semantic caching, tighter context windows, and “stop the bleeding” guardrails like rate limits and max tokens. Most surprise bills happen because org controls are missing, not because the model is expensive.
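A per-feature budget guardrail like the one described can be sketched in plain Python. All names here (`TokenBudget`, the feature keys, the limits) are hypothetical; a real deployment would wire this into its API client and metrics pipeline rather than an in-memory counter.

```python
# Sketch of a per-feature daily token budget that blocks requests
# once a feature exhausts its allocation ("stop the bleeding").

class TokenBudget:
    def __init__(self, daily_limit_tokens):
        self.daily_limit = daily_limit_tokens
        self.used = 0

    def charge(self, prompt_tokens, completion_tokens):
        total = prompt_tokens + completion_tokens
        if self.used + total > self.daily_limit:
            raise RuntimeError("token budget exceeded; request blocked")
        self.used += total
        return total

# Separate budgets per feature, so one runaway loop can't eat the org's spend.
budgets = {
    "search_summaries": TokenBudget(50_000),
    "support_bot": TokenBudget(200_000),
}

budgets["search_summaries"].charge(1_200, 300)
print(budgets["search_summaries"].used)  # 1500
```

The same shape works for per-environment keys: give staging its own small budget so a test loop fails loudly instead of billing quietly.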

AI agent maintenance: why your agents break in 90 days (and how to prevent it)

AI agent maintenance means treating agents like production software: version your prompts/tools, monitor every run, write contract tests for APIs and data formats, and schedule regular model + dependency reviews. Most “it broke” incidents come from silent upstream change, not your logic. Build for observability and rollback from day one.
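A contract test for an upstream dependency, as mentioned above, can be very small. This is an illustrative sketch: the field names and the `check_contract` helper are hypothetical, standing in for whatever response schema your agent actually relies on.

```python
# Contract test: assert the upstream API response still has the shape
# our agent depends on, so silent upstream changes fail loudly in CI
# instead of breaking the agent in production.

EXPECTED_FIELDS = {"id": str, "status": str, "items": list}

def check_contract(response: dict):
    for field, ftype in EXPECTED_FIELDS.items():
        assert field in response, f"missing field: {field}"
        assert isinstance(response[field], ftype), f"wrong type for: {field}"

# Run against a recorded (or live sandbox) response in CI.
check_contract({"id": "ord_1", "status": "open", "items": []})
```

Run it on a schedule, not just on deploy: the whole point is catching changes you didn't make.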

The rise of open-source tools — and why AI makes customization the default

AI pushes value into integrations, policy, and custom logic. Use this rubric to pick SaaS vs open source vs build, plus a 3-step playbook.

5 Signs Your Business Is Ready for an AI Workforce

Not every company needs AI agents today. But if you recognize these five patterns, you are leaving money on the table by not deploying them.
