Dan Lorenc recently started talking about his LLM orchestration experiment, dubbed multiclaude. I’ve used it a bit and thought others looking at it might want to know how it’s different.

How it feels

Multiclaude is similar to Gastown in that it’s a distributed Go binary that helps you launch and manage a pile of Claude Code sessions in tmux. Its vibes are much more “unix tool” than “I’m a mad mage peering through my orb”.

The idea here is that you launch the service in the background via the CLI. You issue commands to it via the CLI to add the repositories you care about. You can then dispatch work to the workers via the CLI. It’s very CLI and, to me, that’s a good thing.

When you’re ready to look at what’s going on, you can have the CLI hook you into the running tmux sessions, one per repository. In each you’ll see a supervisor, a merge-queue, and any active workers.

  • The supervisor knows what work the workers are doing. It periodically checks for new work and spins up workers as needed.
  • The workers do their work until they’re satisfied it’s a pretty good first pass. They’ll make a pull request for it, then they self-destruct.
  • The merge-queue then kicks in. It monitors the open PRs for that repository. If the build is green… it merges the PR. If a PR fails CI, presumably the merge-queue spins up a new worker to fix it; I haven’t had that happen yet.

The supervisor and merge-queue will periodically check for more work to do; a rough sketch of those two loops follows.
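
To make the division of labor concrete, here’s how I picture those loops in Go. This is a conceptual sketch, not multiclaude’s actual code: every name in it (pollForWork, spawnWorker, the intervals, the example repo) is invented for illustration.

```go
// Conceptual sketch only: these names (workItem, pollForWork, spawnWorker,
// the intervals) are hypothetical, not multiclaude's real internals.
package main

import (
	"fmt"
	"time"
)

type workItem struct{ description string }

type pullRequest struct {
	number  int
	ciGreen bool
}

// pollForWork stands in for however the supervisor discovers new tasks.
func pollForWork(repo string) []workItem { return nil }

// spawnWorker stands in for launching a Claude Code session in tmux that
// works the item, opens a PR, and then exits ("self-destructs").
func spawnWorker(repo string, item workItem) {
	fmt.Printf("worker started for %s: %s\n", repo, item.description)
}

// openPullRequests stands in for listing the repository's open PRs.
func openPullRequests(repo string) []pullRequest { return nil }

// merge stands in for merging a PR whose build is green.
func merge(repo string, pr pullRequest) {
	fmt.Printf("merged #%d in %s\n", pr.number, repo)
}

// supervise is the supervisor's loop: periodically look for new work and
// spin up a short-lived worker per item.
func supervise(repo string, every time.Duration) {
	for range time.Tick(every) {
		for _, item := range pollForWork(repo) {
			go spawnWorker(repo, item)
		}
	}
}

// mergeQueue is the merge-queue's loop: watch the repo's open PRs and merge
// the ones with a green build. (What it does on a red build is the part I
// haven't seen yet.)
func mergeQueue(repo string, every time.Duration) {
	for range time.Tick(every) {
		for _, pr := range openPullRequests(repo) {
			if pr.ciGreen {
				merge(repo, pr)
			}
		}
	}
}

func main() {
	repo := "example/repo" // hypothetical
	go supervise(repo, 5*time.Minute)
	go mergeQueue(repo, 2*time.Minute)
	select {} // keep both loops running
}
```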

One of the big differences is that Gastown has a monolithic mayor that you interact with, and that mayor manages the per-repo infrastructure; it controls the whole world of interaction. In multiclaude, you get little repo-centric “neighborhoods” of claudes, each presided over by its supervisor. That makes the changes feel more local. For the work I do, that feels right-sized.

My take

I like it! The CLI nature of the tool is very unobtrusive. It lets you largely ignore the sausage being made. “Gimme this feature plz” and you get a PR that’s green on CI, then merged.

I have a few minor gripes.

The workers disappearing as soon as they finish work is a little disconcerting. It makes me feel like I’ve lost the record of what they actually did. I think this points to a broader need for auditability of what an LLM did when it hands you a PR, rather than something specifically wrong with this tool.

If you let the robot sit in the background, it polls for more work, and I worry about it wasting tokens on those checks. This is largely driven by how precious I feel about my tokens. Hitting the limit suuuucks, and I haven’t come up with or investigated alternatives quite yet.

By design, it puts a tremendous amount of reliance on your CI system. You need linters, code coverage minimums, etc., to enforce whatever you want to drive the robot toward. This means approximately serializing your AGENTS.md file into CI rules. That’s probably a net win for CI providers and for people who want consistency in their build/deploy processes… but it demands a step-function improvement that I don’t think most folks are prepared for.
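
To make that concrete, here’s the kind of rule I mean, translated from prose into a CI gate. This is a hypothetical example, not anything multiclaude ships: the 80% threshold, the cover.out file, and the idea of running it as a pipeline step are all assumptions for illustration.

```go
// Hypothetical CI gate: an AGENTS.md-style rule ("keep statement coverage at
// or above 80%") made machine-checkable so a merge-queue can trust green CI.
// Assumed usage: go test -coverprofile=cover.out ./... && go run ./covergate
package main

import (
	"fmt"
	"os"
	"os/exec"
	"strconv"
	"strings"
)

const minCoverage = 80.0 // the rule that used to live only in AGENTS.md

func main() {
	// `go tool cover -func=cover.out` ends with a line like:
	// "total:  (statements)  83.2%"
	out, err := exec.Command("go", "tool", "cover", "-func=cover.out").Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "coverage gate: could not read cover.out:", err)
		os.Exit(1)
	}
	for _, line := range strings.Split(string(out), "\n") {
		if !strings.HasPrefix(line, "total:") {
			continue
		}
		fields := strings.Fields(line)
		pct, err := strconv.ParseFloat(strings.TrimSuffix(fields[len(fields)-1], "%"), 64)
		if err != nil {
			fmt.Fprintln(os.Stderr, "coverage gate: could not parse total line:", err)
			os.Exit(1)
		}
		if pct < minCoverage {
			fmt.Fprintf(os.Stderr, "coverage gate: %.1f%% is below %.1f%%\n", pct, minCoverage)
			os.Exit(1)
		}
		fmt.Printf("coverage gate: %.1f%% meets the %.1f%% minimum\n", pct, minCoverage)
		return
	}
	fmt.Fprintln(os.Stderr, "coverage gate: no total line in coverage output")
	os.Exit(1)
}
```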

I’m still in experimentation mode, but this feels worth spending a little time on to see a different perspective on agent orchestration.