One Brain, Two Environments
I asked Claude Code on the host "what are my todos?" and it guessed. Searched around, looked at some files, came up with something plausible but not quite right.
I had the container agent open on my phone at the same time. Same question. Clean list, correct items, proper formatting. The Obsidian skill knew exactly how to find the vault, parse TODO.md, return everything. I'd been using it from the container all week.
Two environments, same brain, different answers. That's not a quirk. That's broken.
Quick context if you're not familiar with how NanoClaw works. The host is Claude Code running directly on my Mac. I open a terminal, type a question, it reads my files and runs commands on my actual machine. The container is a separate Linux environment that runs inside a virtual machine. When I message NanoClaw from my phone, it spins up a container, gives it access to specific directories via bind mounts, and runs Claude in there. Both are Claude. Both have access to the same project. But they're separate processes with separate configurations.
I want to do the same work from both. Ask about todos from my desk or from my phone and get the same answer. Draft a blog post from either place. Check what's scheduled. If I can do it from the host, I should be able to do it from the container, and vice versa. The inconsistency isn't a nice-to-have problem. It means I can't trust either environment to give me the full picture.
The todo skill existed. It just lived under container/skills/ and the host had never seen it.
Two Directories, Same Problem
NanoClaw runs two environments. Skills lived in both, separately.
Host skills in .claude/skills/. Container skills in container/skills/. Twenty-four on the host, seven in the container.
Every new skill meant deciding where it lived. Sometimes I'd add it to both and maintain two copies. Sometimes I'd put it in one place and forget the other environment existed. The obsidian-notes skill, the markdown reference, the draft workflow — container only. The setup skills, the channel integrations — host only. Nothing checked whether they matched.
The fix was almost embarrassing once I saw it. Both environments already have access to the same filesystem via bind mounts. Just put the skills there.
One canonical skills/ directory at the repo root. .claude/skills/ becomes symlinks pointing at it. Claude Code finds skills the way it always has, just following links. Container gets the same directory via its existing bind mount.
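The layout change is just symlinks. A rough sketch of the idea in Python (paths and skill names are illustrative; this isn't NanoClaw's actual setup code):

```python
import os
from pathlib import Path

def link_shared_skills(repo: Path) -> list[str]:
    """Symlink every canonical skill under repo/skills into repo/.claude/skills.

    Host-only skills (setup scripts, channel auth) already exist there as
    real directories and are skipped. Returns the names that were linked.
    """
    canonical = repo / "skills"
    host_dir = repo / ".claude" / "skills"
    host_dir.mkdir(parents=True, exist_ok=True)
    linked = []
    for skill in sorted(canonical.iterdir()):
        if not skill.is_dir():
            continue
        link = host_dir / skill.name
        if link.exists() or link.is_symlink():
            continue  # real directory (host-only skill) or already linked
        # Relative links survive the repo being mounted at a different path.
        link.symlink_to(os.path.relpath(skill, host_dir))
        linked.append(skill.name)
    return linked
```

The relative targets matter: the container sees the repo at a different mount point, so absolute symlinks would dangle there.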
Not everything gets shared. Setup scripts and channel authentication stay host-only in .claude/skills/ as actual directories, not symlinks. The container never sees them. Everyday skills like the Obsidian tools and the draft workflow live in the canonical directory and both environments pick them up.
Same principle applied to project context. Root CLAUDE.md was duplicated with overlapping content. A correction on one side stayed on that side. Now the root file is bind-mounted into containers. Both read the same one. Group-specific stuff (persona, communication style) stays in each group's own CLAUDE.md.
The Database I Already Had
Ghost draft mappings track which thesis directory maps to which Ghost post ID, so re-running the draft skill updates the existing post instead of creating a duplicate. Simple data, but I'd stored it in SQLite with a layer of IPC sync on top. Seeding on startup, syncing on shutdown, polling in between. More moving parts than the problem needed.
The Obsidian vault was already mounted everywhere. A markdown table is a perfectly fine data format for this. Human-readable. Editable in Obsidian. Shows up when you browse the vault. Doesn't require a client to inspect.
Ghost drafts moved to a markdown file in the vault's NanoClaw folder. Removed the seed functions, the sync functions, the IPC polling. One less thing talking to the database.
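The whole mapping is a table like this (column names and values invented for illustration; the real file's schema may differ):

```markdown
| Thesis directory | Ghost post ID | Last synced |
| ---------------- | ------------- | ----------- |
| example-draft    | <post-id>     | 2025-01-12  |
```

The draft skill looks up the directory, finds the post ID, and updates in place instead of creating a new draft.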
The Write Queue
Markdown files aren't transactional. Two processes writing at the same time can corrupt a file or lose one of the writes. This wasn't theoretical — a scheduled task and a container IPC sync can fire at the same second.
The write queue is host-side only. All vault writes go through it sequentially. Containers don't write to vault files directly. They drop IPC files and the host processes them one at a time through the queue. Host-internal writes (digest generation, state sync) go into the same queue.
Each write is atomic: write to a .tmp file first, then rename to the target. A reader never catches a partial write. If the process dies mid-write, the tmp file is orphaned and the original is intact. Simple.
Reads don't go through the queue. The atomicity of each write means a reader either sees the old file or the new one, never a half-written one.
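Queue plus atomic rename is a small amount of code. A minimal sketch of the shape, with names invented for illustration (not NanoClaw's actual implementation):

```python
import os
import queue
import threading
from pathlib import Path

class VaultWriteQueue:
    """Serialize all vault writes through one worker thread.

    Each write is atomic: content goes to a .tmp sibling first, then a
    rename swaps it into place. Readers see the old file or the new
    file, never a partial one.
    """

    def __init__(self) -> None:
        self._jobs: queue.Queue = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def enqueue(self, path: Path, content: str) -> None:
        self._jobs.put((path, content))

    def flush(self) -> None:
        self._jobs.join()  # block until every queued write has landed

    def _run(self) -> None:
        while True:
            path, content = self._jobs.get()
            try:
                tmp = path.with_suffix(path.suffix + ".tmp")
                tmp.write_text(content, encoding="utf-8")
                os.replace(tmp, path)  # atomic rename on POSIX
            finally:
                self._jobs.task_done()
```

If the process dies between `write_text` and `os.replace`, the `.tmp` file is orphaned and the original is untouched, which is exactly the crash behavior described above.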
Not Everything Fits
I tried putting tmux session state in the vault too. Simpsons loops (background Claude sub-agents) track active sessions so the host knows what's running and can clean up dead ones. Sessions start and stop fast. A loop finishes, cleanup check fires two seconds later, reads the vault file. Except the write queue hasn't flushed yet. The session exists in memory, exists in SQLite, but the markdown file still shows old state. Stale read. Cleanup thinks the session doesn't exist.
Not a theoretical race condition. It happened.
Sessions need synchronous reads. Start a session, immediately read it back, make decisions on that read. The write queue adds latency that's fine for ghost draft mappings but breaks for something that changes every few seconds.
So sessions stayed in SQLite. But they still get a vault projection. After every SQLite write, a mirror function rebuilds tmux-sessions.md and pushes it through the write queue. The digest reads it. Container agents read it. From the vault it looks like any other markdown table. Nobody knows the real data is in a database.
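The mirror step is mechanical: read rows, emit a table, hand the string to the write queue. Schematically (table and column names are made up; the real schema is theirs):

```python
import sqlite3

def mirror_sessions_to_vault(db: sqlite3.Connection) -> str:
    """Project the SQLite sessions table into a markdown table.

    SQLite stays the source of truth; this just rebuilds the vault's
    view after every write. The returned string would be pushed
    through the write queue as tmux-sessions.md.
    """
    rows = db.execute(
        "SELECT name, status, started_at FROM sessions ORDER BY started_at"
    ).fetchall()
    lines = [
        "| Session | Status | Started |",
        "| ------- | ------ | ------- |",
    ]
    for name, status, started_at in rows:
        lines.append(f"| {name} | {status} | {started_at} |")
    return "\n".join(lines) + "\n"
```

Because the projection is rebuilt from scratch on each mutation, it can never drift from the database the way two independently edited copies would.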
That's when the heuristic clicked. New state starts in markdown. If the access pattern needs synchronous reads or frequent mutations, promote to SQLite. Either way, project to the vault. Start simple, escalate when friction shows up.
What This Unlocked
Once I had both stores sorted (markdown by default, SQLite for the hot path), the rest fell into place.
NanoClaw has a daily digest. It's a scheduled task that runs every morning, collects what happened in the last 24 hours, and sends a summary to my phone. Open todos, active container sessions, scheduled task history. Before the vault was shared state, building this would've meant more sync between SQLite and the container. Now the digest just reads markdown files that both environments write to, so the summary always has the full picture.
The todos skill is a shared todo list. I can say "add a todo to the NanoClaw backlog" from my phone and the container agent writes it through IPC. The host processes it through the queue and it shows up in my Obsidian vault as a markdown table with categories and statuses. I can also edit it directly in Obsidian or ask Claude Code on the host. Next morning's digest picks it up either way.
Active container sessions still go straight to sessions.md in the vault. Scheduled tasks and their run history live in SQLite because they get polled every minute and need indexed lookups. Both get vault projections though. When a scheduled task finishes, a mirror function rebuilds task-runs.md through the write queue. The digest reads from all of them and doesn't know which ones are source of truth and which are mirrors.
From the vault it all looks the same. Markdown tables you can browse in Obsidian. Some are the real data. Some are copies. Doesn't matter.
The Brain Itself
Skills and state were the obvious ones. Memory took longer to notice.
Claude Code has a built-in memory system. .claude/projects/.../memory/ — markdown files where it stores things you tell it to remember. User preferences, behavioral corrections. Works fine on the host. The container had a separate one. A correction from my phone didn't reach the desktop.
Same pattern as everything else. The vault was already mounted. Memory files moved to NanoClaw/memory/ in the Obsidian vault. Both environments read and write the same directory. Behavioral corrections and project context sit in markdown files with frontmatter tags so you can grep for type and severity.
Added a search layer on top. Before acting on any request, the agent runs qmd query with keywords from what you asked. Searches memory files, shared config, the whole vault. Host or container, same search, same results. A correction you give from your phone shows up on your desktop without anyone syncing anything.
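Stripped of the indexing, the search reduces to a keyword scan with memory files ranked first. A naive Python stand-in for the idea (not the actual qmd tool, and names are illustrative):

```python
from pathlib import Path

def search_vault(vault: Path, keywords: list[str]) -> list[Path]:
    """Return markdown files mentioning any keyword, memory files first.

    A crude stand-in for a real indexed search: scan every .md file in
    the vault and rank hits under memory/ above everything else.
    """
    hits = []
    for md in vault.rglob("*.md"):
        text = md.read_text(encoding="utf-8", errors="ignore").lower()
        if any(kw.lower() in text for kw in keywords):
            hits.append(md)
    return sorted(hits, key=lambda p: (0 if "memory" in p.parts else 1, str(p)))
```

The ranking is the point: a behavioral correction should outrank an incidental mention of the same keyword in a todo list.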
The built-in memory folder still exists. One file in it: "Memory has moved to Obsidian."
It Stuck
The heuristic that clicked after the tmux stale-read bug — start with markdown, promote to SQLite when the access pattern breaks — became documented architecture. STATE-MANAGEMENT.md with a state store index tracking every piece of persistent state, which store owns it, and which mirror function keeps the vault projection current. A glossary so both environments use the same vocabulary. Naming conventions for functions: mirrorXxxToVault() for projections, readXxx() for Obsidian-primary state, onXxxChanged() for mutation callbacks.
The kind of thing you write down after you've made the mistake enough times that you don't want to make it again.
Slack is a channel now too. Same skills, same state, same memory. The title says "two environments" but the architecture stopped caring about the number a while ago.
Back To The Todos
I asked Claude Code on the host "what are my todos?" again. Same list I'd get from my phone. Same items, same formatting, same skill. No guessing. Same behavioral memory too — it remembers the same corrections regardless of where I'm talking to it.
The bind mount was already there. The vault was already there. Skills went from two directories to one. State went from sync machinery to markdown files behind a write queue. The few things that needed SQLite got projections so the vault still has the full picture. Most of the work was subtracting things.
Slack is consistent with all of that too, which is the good news. The bad news is the Slack formatting is ugly as hell and I'm going to have to fix it eventually. One thing at a time.