The problem with AI coding assistants that nobody talks about
Let me describe a day that probably sounds familiar.
You are mid-feature. You've got Claude running, it's editing files, things are moving. You switch to the browser to check something. The app is in a broken state, half the changes are in, half aren't, and you're looking at a UI that belongs to neither version. You open the terminal to investigate. Git status shows a pile of modified files you didn't touch. The AI was mid-edit when you looked away.
So you context switch. Someone pings you about a bug on a different page. You switch branches. Pull latest. Rebuild assets. Set up the environment. Test the thing. Switch back. Spend ten minutes figuring out where you were. Train of thought: gone.
This happens every day for most developers using AI assistants. And nobody talks about it because it's easy to write off as friction, as the cost of using powerful tools, as something you just get used to.
Except it isn't. It's an architecture problem.
You are sharing one working directory between your brain, your browser, and an AI agent that doesn't know or care which of those is active at any given moment. Something always breaks. The only question is when.
The insight: branches aren't states to switch between, they're environments to run in parallel
Here's the mental shift that changed how I work.
Most developers think of a branch as a state. The codebase is in state A (main), or state B (your feature), or state C (the hotfix). You move between them by switching. One codebase, one running app, one state at a time.
That model is fine when you're working alone on one thing. It breaks completely when you have an AI agent, a browser, and a terminal all sharing that same state simultaneously.
The alternative: what if each branch was its own running environment? Its own directory. Its own app. Its own URL. And you never switched at all, you just opened a different window.
This isn't a new idea. It's exactly how Docker works for services. You don't switch between containers. You run them in parallel. Each one has its own isolated state. Each one is accessible independently. The fact that three of them are running simultaneously is a feature, not a problem.
The same logic applies to branches.
Building block one: the /worktree skill
Git has a feature called worktrees. It's been there for years. Most developers have never used it.
A worktree lets you check out a branch into a completely separate directory. Not a copy of the repo, the git objects are shared, so there's no duplication of history or blobs. Just a separate working directory with its own files, on its own branch, running independently.
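In the terminal, the primitive looks like this (a throwaway demo repo; the paths and branch name are illustrative):

```shell
# Throwaway repo just to demonstrate the primitive.
cd "$(mktemp -d)"
git init -q app && cd app
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty -m "initial commit"

# Check out a new branch into a sibling directory.
# Git objects and history stay shared; only the working files are separate.
git worktree add ../app-payments -b feat/payments

git worktree list         # two working directories, one repository
cat ../app-payments/.git  # a pointer file back to the main repo, not a clone
```

The `.git` entry in a linked worktree is a small file pointing at the main repository, which is why adding a worktree costs a checkout, not a clone.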
That's the primitive. What I built on top of it is a Claude Code skill that turns one command into a fully working dev environment:
/worktree create feat/payments-redesign
Here's what happens in the two minutes after you type that:
✓ Branch created: feat/payments-redesign (from staging)
✓ Directory created: ../app-payments-redesign.test/
✓ .env patched: unique domain, unique app name
✓ Server setup: payments-redesign.app.test (live in browser)
✓ Dependencies installed, setup commands and project rules applied
✓ Open: http://payments-redesign.app.test
That's it. You have a fully running, browser-accessible copy of your app on its own branch. Claude works in it. Your browser points to it. Your terminal lives in it. Nothing from any other branch touches it.
When the feature ships:
/worktree remove payments-redesign
Gone. Branch deleted, directory removed, worktree unlinked, subdomains cleaned up. One command in, one command out.
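For a sense of what those two commands automate, here is a minimal shell sketch. The `.test` directory naming, the `staging` base branch, and the `APP_URL` patching are assumptions drawn from the Laravel setup described in this article; this is not the actual skill file, which is markdown instructions for Claude rather than a script.

```shell
# Hypothetical shell equivalent of "/worktree create" and "/worktree remove".
create_worktree() {
  name="$1"
  dir="../app-$name.test"

  git worktree add "$dir" -b "feat/$name" staging   # new branch off staging
  # Per-worktree env copy with its own local domain (Laravel-style key assumed):
  sed "s|^APP_URL=.*|APP_URL=http://$name.app.test|" .env > "$dir/.env"
  # Stack-specific steps (dependency install, Herd/Valet domain link) go here.
  echo "Open: http://$name.app.test"
}

remove_worktree() {
  name="$1"
  git worktree remove --force "../app-$name.test"   # drop the directory
  git branch -D "feat/$name"                        # and the branch
}
```

The symmetry is the point: everything `create_worktree` sets up, `remove_worktree` tears down, so environments never accumulate.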
[Image: parallel agent worktrees created]
Building block two: the /parallel agent
The worktree skill solves isolation. The parallel agent solves scale.
Once you have isolated environments, the natural next question is: why am I creating them one at a time? Why am I working in one at a time?
The /parallel agent takes a list of tasks:
/parallel
1. Add export button to the brands index page
2. Fix the pagination bug on the deals page
3. Add missing tests for CampaignController
And does three things:
- Creates a worktree for each task, full environment setup, all three running simultaneously
- Spawns one AI agent per worktree, all at the same time
- Gets out of your way
You grab a coffee. You come back to review, merge, and deploy. That's your only job now.
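The fan-out itself can be as simple as shell background jobs. A sketch, where `agent_cmd` stands in for a headless agent invocation (something like `claude -p` in Claude Code's CLI; treat the exact interface as an assumption):

```shell
# Sketch: one worktree plus one background agent per task, then wait for all.
# Assumes a "staging" base branch, as in the rest of this article.
run_parallel() {
  agent_cmd="$1"; shift
  for task in "$@"; do
    git worktree add "../app-$task" -b "feat/$task" staging
    ( cd "../app-$task" && $agent_cmd "$task" ) &   # isolated agent, own dir
  done
  wait   # block until every agent reports back
  echo "All agents finished; review each branch before merging."
}

# Usage:
# run_parallel "claude -p" brands-export deals-pagination campaign-tests
```

Because each agent runs in its own directory on its own branch, the background jobs need no locking or coordination at all.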
How the agent architecture actually works
This is where it gets interesting for people who care about how things are built.
There are two types of agents in this system. They have very different jobs.
The orchestrator is the top-level agent, the one you're talking to when you type /parallel. It has full context: all tasks, all worktrees, all branches. It creates the environments, spawns the subagents, collects the results, and reports back to you. It is the only entity in the system that knows everything.
Subagents are isolated subprocesses. Each one is a completely separate context window, like opening a fresh Claude session that knows nothing about the conversation you just had. Each subagent receives exactly three things:
- Its task
- Its worktree path and branch
- A brief awareness summary of what the other agents are working on
That third item is important. The awareness summary isn't about coordination, subagents can't communicate with each other and have no shared state. It's just enough information to know which files to stay out of. "Agent B is touching DealsIndexQuery. Agent C is adding tests only, no production code." That's all any agent needs to know about its neighbors.
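Assembled as a prompt, those three inputs might look like this. The template wording is invented for illustration; the article doesn't publish the exact text each subagent receives.

```shell
# Sketch: the three things a subagent gets at spawn time, as one prompt.
build_subagent_prompt() {
  task="$1"; worktree="$2"; branch="$3"; awareness="$4"
  cat <<EOF
Task: $task
Work only inside: $worktree (branch: $branch)
Other agents (stay out of their files): $awareness
Implement the task, write tests, run them, commit. Do not merge or push.
EOF
}

# Usage:
# build_subagent_prompt \
#   "Fix the pagination bug on the deals page" \
#   "../app-deals-pagination" "feat/deals-pagination" \
#   "A is on brands export; C is adding tests only, no production code"
```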
[Image: parallel agent orchestrator architecture]
The communication structure looks like this:
You
└── Orchestrator (knows everything)
    ├── Agent A — feat/brands-export
    │     knows: its task + "B is on deals, C is on tests"
    ├── Agent B — feat/deals-pagination
    │     knows: its task + "A is on brands, C is on tests"
    └── Agent C — feat/campaign-tests
          knows: its task + "A is on brands, B is on deals"
Context flows down from orchestrator to agents at spawn time. Results flow back up when each agent finishes. That's the only communication channel. There's no message passing between agents, no shared memory, no coordination layer.
This is by design. Agents with too much context produce worse output. They second-guess themselves. They try to accommodate things they don't need to accommodate. The narrower the context, the more focused the work.
Each subagent implements its task, writes tests, runs them, commits, and reports back. It doesn't merge. It doesn't push. It doesn't make decisions about what happens next. That's the orchestrator's job.
What comes back when they're done
When all three agents finish, the orchestrator produces a results summary:
✅ Parallel run complete
feat/brands-export
→ Added CSV export to BrandsController + Index.vue
→ Test: BrandsExportTest — 3 assertions passing
→ Review: http://brands-export.app.test
feat/deals-pagination
→ Fixed off-by-one error in DealsIndexQuery
→ Test: DealsPaginationTest — 5 assertions passing
→ Review: http://deals-pagination.app.test
feat/campaign-tests
→ Added 8 tests: auth, attacker-tests, CRUD, E2E
→ All passing
→ Review: http://campaign-tests.app.test
Three tasks. Three live environments. Three sets of passing tests. All done while you were doing something else.
One more thing the orchestrator handles: merge conflict detection. If two agents touched the same file, it flags it, but only after they finish, not during the run. During implementation, two agents editing the same file in two separate worktrees is completely fine. They're isolated. They can't conflict. The conflict only appears when you merge the second branch into staging.
So the flag appears exactly when it matters:
⚠️ Merge warning: brands-export and deals-pagination both modified
BrandController.php. Resolve conflict when merging the second branch.
You know about it before you hit it. You can plan the merge order accordingly.
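The detection itself needs nothing beyond plain git. A bash sketch (the `staging` base branch is this article's convention; how the orchestrator actually implements the check is not published):

```shell
# Sketch: list files modified on BOTH branches since they diverged from base.
# Any file printed here is a merge-conflict candidate for the second merge.
overlap() {
  base="$1"; a="$2"; b="$3"
  comm -12 \
    <(git diff --name-only "$base...$a" | sort) \
    <(git diff --name-only "$base...$b" | sort)
}

# Usage:
# overlap staging feat/brands-export feat/deals-pagination
# Prints one filename per line; empty output means the branches don't overlap.
```

The three-dot `git diff` compares each branch against its merge base with staging, so files already changed on staging itself don't trigger false warnings.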
[Image: agents working autonomously to finish their work]
The Docker analogy that made it click
I spent a while trying to explain this system to other developers before I found the framing that made it land.
Docker containers for branches.
When you run a multi-service application locally, you don't run all the services on the same port and wonder why things break. You don't "switch" between your database and your API server. You run them simultaneously, each isolated, each on its own port, each unaware of the others. Docker Compose is the orchestrator, it knows about all of them. The containers don't know about each other.
That's exactly this system. Each worktree is a container. Each branch is a service. The parallel agent is docker-compose. You, reviewing and merging, are the operator deciding what gets deployed.
The only difference is what's being isolated. Docker isolates services. Worktrees isolate branches. Same mental model, different problem.
Once I had this framing, the whole thing felt obvious. Of course you'd run parallel. Of course you'd isolate. Of course there would be an orchestrator. The question is just: why wasn't this the default way of working from the start?
What my workflow looks like now
Concretely, today:
I have three worktrees running. A performance epic, large refactor, takes days, always available in the browser at its own URL. A feature branch, new capability I'm building incrementally. A PR I'm reviewing, live in the browser, I can click around and test it before approving.
None of them interfere with each other. None of them interfere with me. When I want to check the performance epic I open a tab. When I want to review the PR I open a different tab. When I'm done with the feature I merge it and tear down the worktree.
Context switching is gone. Not reduced but gone. I don't switch branches. I haven't in months. I open a different window.
The shift isn't just about productivity. It's about clarity. Every environment is clean, intentional, and scoped to exactly one thing. You always know what state you're looking at, because that environment has never been anything else.
My job in this workflow is to direct, review, and decide. The agents do the implementation. I do the judgment.
How to set this up yourself
The good news: git worktrees are built into git. No installation, no plugins, nothing extra. They've been there since git 2.5.
The setup has four parts:
- Git worktrees: already available. git worktree add is all you need at the primitive level.
- A local domain server: Laravel Herd or Valet on Mac. This is what makes each worktree accessible as its own .test domain in the browser. If you're not on a PHP stack, any local server that can serve from a directory will work.
- The /worktree skill: a Claude Code command file. About 150 lines of markdown that tells Claude exactly what to do: create the worktree, patch the .env, link to Herd, install dependencies, report back. No code, just instructions.
- The /parallel agent: another command file that calls the worktree skill once per task and spawns agents in parallel. About 180 lines.
Both files live in .claude/commands/ in your project. Claude Code loads them automatically. You type the command, Claude reads the instructions and executes them.
I'll share both files as a GitHub gist: DM me. If you're not on Laravel the worktree logic is universal, you'll just need to adjust the .env patching and domain linking for your stack.
The bigger idea
AI coding assistants are getting faster every month. The models are improving. The tools are improving. The time between "ask" and "done" is shrinking rapidly.
The bottleneck is no longer how fast the AI can write code.
It's how fast you can review it, validate it, and ship it.
That's a fundamentally different problem. And it requires a fundamentally different way of working. You can't review three things at once if they're all in the same environment. You can't let agents work autonomously if they're all fighting over the same working directory. You can't scale your output if you are still the one doing all the implementation.
Parallel agents with isolated environments is one answer to this bottleneck. Not the only answer. But a real one.
The shift it asks you to make is this: stop thinking of yourself as the developer who writes the code, and start thinking of yourself as the orchestrator who directs, reviews, and ships it.
Nobody should look at this as a loss. This is leverage.
The developers who figure this out first will ship faster, context-switch less, and scale their output in ways that look disproportionate to the hours they are working.
The ones still switching branches manually while their AI waits will wonder why.
The original article appeared on LinkedIn.
Happy to answer questions about the setup in the comments. Drop the link to this post if you share it, would love to see how others adapt this for their stacks.
