Why I Switched From OpenClaw to NanoClaw
I never updated OpenClaw. Set it up, built a bunch of stuff on it, and then stopped looking at upstream. Not because it broke. Because the idea of pulling in changes made me uneasy.
As of my last check, OpenClaw has 334,693 stars, 357 contributors, 9,335 open issues, and 6,151 open pull requests. NanoClaw has 25,330 stars, 44 contributors, 168 open issues, and 401 open pull requests. Both are useful agents. Both use Claude for orchestration. But they come with very different levels of cognitive load.
What Made Me Nervous
I could've pulled upstream at any point. I just didn't. OpenClaw churns through issues and PRs at a pace I couldn't keep up with as a human reading diffs. That's not a knock on the project. It's well-run and well-maintained. It's just a lot of unknowns for software that has access to my Obsidian vault, my Telegram, my blog.
NanoClaw was different. When I pull from upstream, I can read what changed. I can roughly understand why. It's not perfect, but the codebase is small enough that a human can follow it. That gave me more confidence than any review process could.
How They Run
Both platforms use Claude, but through different mechanisms. OpenClaw calls the Anthropic API directly. I optimized it over time to use Haiku for the main orchestrator and delegate heavier work to Claude Code tmux sessions. I started with Opus for initial setup and improvements, then dropped to Haiku for day-to-day use.
NanoClaw takes a different route. It uses the Claude OAuth token for the main orchestrator and delegates the heavy lifting to Claude Code tmux sessions.
I could've gone the OAuth route on OpenClaw. The code supports it. But there was so much FUD about OAuth tokens getting users banned that I didn't want to risk it. Didn't matter if it was true or not. Too much noise, not enough clarity. NanoClaw's docs were straightforward about it. And the codebase was small enough that I could read what the token was actually doing. That made me comfortable going that route.
Same Stuff, Twice
The interesting part is that I built basically the same things on both platforms. Same patterns, same capabilities, different foundation.
On OpenClaw I had voice transcription via local whisper.cpp, Obsidian notes integration for persistent memory, a draft pipeline that goes from voice note to blog post to X thread, speckit and Simpsons loops for spec-driven development, Claude Code sessions with tmux lifecycle management, and Telegram as the primary channel.
On NanoClaw, all of that transferred. Voice transcription, Obsidian integration, draft pipeline, Simpsons loops, scheduled tasks. Then I kept going. fal.ai image generation for blog headers. Ghost publishing with upsert so I stop creating duplicate drafts. An Obsidian write queue so the container and host don't step on each other.
Some things actually got better on the simpler foundation. Less time fighting infrastructure, more time on the actual feature.
The OS Part
Both platforms use the same software engineering practices. And those practices aren't new. They're borrowed from operating systems.
Shared memory with locking. The container and the host both mount the Obsidian vault. It's just markdown files. But you can't have both writing at the same time or you get conflicts. So all writes go through a sequential queue. Same problem operating systems solved decades ago with mutex locks.
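The write queue can be sketched as a single worker thread draining a queue, so every producer enqueues and nothing else ever touches the files. This is an illustrative sketch, not code from either project; the `VaultWriteQueue` name and its methods are invented here.

```python
import queue
import threading
from pathlib import Path

class VaultWriteQueue:
    """Serializes all vault writes through one worker thread, so the
    container and the host never write the same file concurrently."""

    def __init__(self, vault_dir: str):
        self.vault_dir = Path(vault_dir)
        self._queue: queue.Queue = queue.Queue()
        self._worker = threading.Thread(target=self._drain, daemon=True)
        self._worker.start()

    def write(self, relative_path: str, content: str) -> None:
        # Producers only enqueue; they never touch the files directly.
        self._queue.put((relative_path, content))

    def _drain(self) -> None:
        while True:
            relative_path, content = self._queue.get()
            target = self.vault_dir / relative_path
            target.parent.mkdir(parents=True, exist_ok=True)
            target.write_text(content, encoding="utf-8")
            self._queue.task_done()

    def flush(self) -> None:
        # Block until every queued write has hit disk.
        self._queue.join()
```

The queue is the mutex: because there is exactly one consumer, writes are naturally ordered and last-write-wins is deterministic instead of a race.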
Queue-based messaging. Messages come in from Telegram, Slack, wherever. They go into a queue and get processed one at a time per group. No race conditions, no dropped messages. Standard producer-consumer pattern.
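The per-group ordering can be sketched with one queue and one worker per group, so messages within a group are handled strictly in order while different groups proceed in parallel. Again a hypothetical sketch under those assumptions, not NanoClaw's actual dispatcher.

```python
import queue
import threading
from typing import Callable

class GroupDispatcher:
    """Producer-consumer: each group id gets its own queue and worker
    thread, serializing messages per group without blocking others."""

    def __init__(self, handler: Callable[[str, str], None]):
        self.handler = handler
        self._queues: dict[str, queue.Queue] = {}
        self._lock = threading.Lock()

    def submit(self, group_id: str, message: str) -> None:
        with self._lock:
            q = self._queues.get(group_id)
            if q is None:
                # First message for this group: create its queue and worker.
                q = queue.Queue()
                self._queues[group_id] = q
                threading.Thread(
                    target=self._drain, args=(group_id, q), daemon=True
                ).start()
        q.put(message)

    def _drain(self, group_id: str, q: queue.Queue) -> None:
        while True:
            message = q.get()
            self.handler(group_id, message)
            q.task_done()

    def flush(self) -> None:
        with self._lock:
            queues = list(self._queues.values())
        for q in queues:
            q.join()
```

One worker per queue is what guarantees the "one at a time per group" property; fan-out across groups comes for free from having independent queues.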
Single-purpose, reusable capabilities. Skills in NanoClaw, tools in OpenClaw. Each one does one thing. Voice transcription doesn't know about Obsidian. The draft pipeline doesn't know about image generation. They compose because they're isolated. Unix philosophy.
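The composition pattern looks roughly like this; the function names are made up to show the shape, not real skills from either project:

```python
def transcribe(audio_path: str) -> str:
    # Knows only audio -> text (whisper.cpp would sit underneath).
    return f"transcript of {audio_path}"

def draft_post(transcript: str) -> str:
    # Knows only text -> draft; no idea where the text came from.
    return f"# Draft\n\n{transcript}"

def voice_to_draft(audio_path: str) -> str:
    # Composition lives at the edges. Each step stays ignorant of
    # the others, so any one of them can be swapped independently.
    return draft_post(transcribe(audio_path))
```

Because each step only sees its own input and output, replacing the transcriber or the drafter never touches the other.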
Container isolation. NanoClaw runs every agent session in its own Linux container. A session can only see what's explicitly mounted into it, so bash access is safe because it's sandboxed. OpenClaw doesn't ship with container isolation out of the box, and I didn't run it that way. But the isolation principle is the same one cloud infrastructure has used for years, applied to personal AI agents.
State syncing across devices. I talk to the agent from my phone over Telegram, from my desk in Claude Code, from my laptop wherever I am. Same agent, same memory. The Obsidian vault and SQLite database are the source of truth. Everything else syncs from there.
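The source-of-truth idea can be sketched with SQLite alone: every device opens the same database file, writes are upserts, and reads always reflect the latest write. The schema here is invented for illustration.

```python
import sqlite3

def open_state(db_path: str) -> sqlite3.Connection:
    # Every device opens the same file; SQLite handles file locking.
    conn = sqlite3.connect(db_path)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS agent_state ("
        "  key TEXT PRIMARY KEY,"
        "  value TEXT NOT NULL"
        ")"
    )
    return conn

def set_state(conn: sqlite3.Connection, key: str, value: str) -> None:
    # Upsert: whichever device writes last wins, and every other
    # device sees the same row on its next read.
    conn.execute(
        "INSERT INTO agent_state (key, value) VALUES (?, ?) "
        "ON CONFLICT(key) DO UPDATE SET value = excluded.value",
        (key, value),
    )
    conn.commit()

def get_state(conn: sqlite3.Connection, key: str):
    row = conn.execute(
        "SELECT value FROM agent_state WHERE key = ?", (key,)
    ).fetchone()
    return row[0] if row else None
```

Nothing is pushed between devices; they all just read and write the shared file, which is why "same agent, same memory" holds from phone, desk, or laptop.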
Cognitive Load
OpenClaw and NanoClaw are both good. They share the same core patterns because those patterns are the right ones regardless of platform. The difference is how much you have to hold in your head.
OpenClaw gives you more out of the box. Native mobile apps, 23+ channel integrations, a web UI, hundreds of contributors pushing things forward. If you need that, it's the better choice.
NanoClaw gives you less. Fewer contributors, fewer unknowns, a codebase you can read in an afternoon. When something breaks you can find it. When you pull upstream you can follow it.
I wanted the one I could understand. The patterns are the same either way.