The orchestrator and the builder

I asked Jama to write a blog post. Specifically, I wanted a post about setting up OpenClaw on Ubuntu 24, all the yak shaving I did getting jama-box running. I told it to use the coding-agent skill, which spins up a Claude Code session to do the actual writing.

Simple enough. Clean up my rambling prompt, hand it to Claude Code, let it do the research and writing.

That's not what happened.

The research binge

Instead of passing the task along, Jama started doing the research itself. cat /var/log/apt/history.log to pull my install timeline. brew list to see what packages I had. asdf plugin list to check my runtimes. find ~/.dots to explore my dotfiles. systemctl status xrdp to check the remote desktop setup.

It was running command after command, gathering all the raw material a blog post writer would need. Timestamps from apt logs, SSH configs, systemctl output. Thorough work, honestly.

But completely wrong.

Wrong job, wrong agent

Jama isn't supposed to do research. That's Claude Code's job. The whole point of the coding-agent skill is that Jama writes a clear prompt, launches a Claude Code session, and monitors the lifecycle. Claude Code does the actual exploring and writing. Jama manages the process.

It was the equivalent of a project manager walking into the server room and pulling cables.

I stopped it mid-run. Told it to let the skills do the heavy thinking. "You are just cleaning up my prompts and passing them on to the skill to work. You are not supposed to do heavy lifting. You are the orchestrator."

Then I added: "Store this in your soul."

SOUL.md

OpenClaw agents have a file called SOUL.md. It's loaded into every conversation as part of the system context. Defines how the agent behaves, what patterns it follows. Not a config file exactly. More like persistent instincts.

Jama immediately opened its SOUL.md and added a new principle. Called it "Orchestrate, Don't Execute." The gist: when a skill exists for a task, don't do the skill's work. Don't pre-research, don't gather context. Write the prompt and hand it off.
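I'm paraphrasing from memory rather than quoting the file, but the entry has roughly this shape (a hypothetical reconstruction, not the literal SOUL.md):

```markdown
## Orchestrate, Don't Execute

When a skill exists for a task, write the prompt and hand it off.
Don't pre-research, don't gather context, don't do the skill's work yourself.
```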

Most AI agent corrections don't last. You tell it to stop doing something, it stops for that conversation, then does it again next time because the context is gone. Jama wrote this to its own soul file, in real time, because I told it to. Next conversation, that rule is there on startup.

After

The shift was immediate. Jama wrote a prompt describing what the blog post should cover, launched a Claude Code session, set up the lifecycle monitoring (tmux session, watcher cron, Telegram notifications), and got out of the way.

Claude Code did the research. Read the apt logs, checked the SSH setup, explored the filesystem, wrote the post. Same work Jama had been trying to do, but in the right place.
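The shape of that handoff is simple enough to sketch. Here's a toy version in Python; `hand_off`, its arguments, and the worker command are illustrative stand-ins, not OpenClaw's actual API:

```python
import subprocess

def hand_off(prompt: str, worker_cmd: list[str]) -> str:
    """Orchestrator pattern: tidy the prompt, launch a worker process,
    wait for it to finish, and return its output. The orchestrator
    never does the research itself."""
    cleaned = " ".join(prompt.split())  # trivial stand-in for prompt cleanup
    proc = subprocess.run(
        worker_cmd, input=cleaned, capture_output=True, text=True, check=True
    )
    return proc.stdout

# `cat` stands in for the real worker (a Claude Code session):
result = hand_off("write   a blog post   about openclaw", ["cat"])
```

The real version launches the session detached, registers a watcher, and notifies on completion. The point is just that the orchestrator's job ends at the handoff.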

Why it slipped

I think Jama defaulted to doing the research because it could. It has access to the same tools Claude Code does. It can run bash commands, read files, grep through logs. The capability was there, and the instinct to "be helpful" kicked in. If you can answer the question, why wouldn't you?

Because that's not the architecture. Jama sits in a conversation with me on Telegram or in the TUI. Its context window is for our dialogue, not for accumulating 500 lines of apt history. If it burns context on research, it has less room for orchestration: session management, tracking what's running where.

Claude Code gets a fresh context window sized for the task. It can read as many files as it needs without crowding out anything else. Scoped to one job.

One sentence in a markdown file. No retraining, no code change.

Cheers.