Sergey Kopanev: you sleep — agents ship

Building AI Autopilot · Part 11

300 Lines of Bash Replaced Six AI Skills


Building AI Autopilot for code, research, and workflows.

I’m building HLTM — a personal automation layer that runs AI agents on projects autonomously. The goal: drop a brief, walk away, come back to working code. After six versions of LLM-as-orchestrator, I stopped trying to fix the design and deleted it.

v1.0.0 was a clean break.

The orchestrator kept pretending to be deterministic. It was not.

So I moved control into code that cannot improvise.

What Got Deleted

hltm-autopilot/
hltm-implementation/
hltm-actualize/

Gone. Every skill, every reference file, every version of the orchestrator that ran inside Claude.

Six skills across three components. Hundreds of lines of prompt engineering. Four months of iteration.

Deleted in one commit.

What Replaced It

hltm-loop.sh

300 lines of bash. One file.

No agent reads it. No model interprets it. It’s not a prompt. It’s a program.

How the Loop Works

Every round follows the same sequence.

Spawn agent with fresh context and the current prompt file. Agent does the work. Agent emits a signal tag. Bash reads the tag. Routes to next stage. Repeat.

[fresh context + prompt] → agent → <loop:stage next="review"> → bash → [fresh context + review prompt] → agent → <loop:stage next="develop"> → bash → ...

The signal is the only contract between the agent and the loop. Four XML tags. That’s the interface.

No state persists between rounds. The agent reads snapshot files every time to understand the project. Not memory. Not accumulated context. Snapshot files. Deterministic.
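That sequence can be sketched in a few lines of bash. Everything below is a mock: `run_agent`, the stage names, and the prompt paths are assumptions for illustration, not the actual hltm-loop.sh.

```shell
#!/usr/bin/env bash
# Sketch of the round loop. run_agent stands in (mocked here) for
# whatever spawns a fresh-context agent on the current prompt file.
set -euo pipefail

# Mock agent: in reality this would be a fresh session reading
# prompts/<stage>.md plus the snapshot files, then emitting a signal.
run_agent() {
  case "$1" in
    develop) echo '<loop:stage next="review">' ;;
    review)  echo '<loop:stage next="done">' ;;
  esac
}

stage="develop"
while [ "$stage" != "done" ]; do
  # Fresh context every round: only the prompt file and the current
  # snapshot files go in; nothing carries over from the last round.
  output=$(run_agent "$stage")
  # The signal tag is the only contract between agent and loop.
  stage=$(printf '%s' "$output" | sed -n 's/.*<loop:stage next="\([^"]*\)">.*/\1/p')
done
```

The loop never interprets the agent's output. It extracts one tag and routes on it; everything else the agent said is ignored.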

Why Fresh Context Matters

The orchestrator problem — the one I spent four months failing to solve — was context accumulation.

Each round, the LLM orchestrator added more to the context. Its own reasoning. The agent’s output. The routing decision. The next agent’s output. Round five looked nothing like round one. By round ten, the model was navigating a wall of stale text and making probabilistic decisions about what was still relevant.

Fresh context per round means round ten looks exactly like round one. Same inputs, same prompt, current snapshot files.

Context size dropped roughly 80% per round compared to the accumulated LLM orchestrator. Not because I was smarter about what to include. Because I was smarter about what to throw away.

Throw away everything. Every round.
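For illustration, here is what "throw away everything" might look like as code. The file names are hypothetical; the point is that each round's input is rebuilt from disk, never carried over.

```shell
#!/usr/bin/env bash
set -euo pipefail
cd "$(mktemp -d)"

# Hypothetical layout: one prompt per stage, snapshot files the agent
# re-reads every round. None of these names are from the real repo.
mkdir -p prompts snapshots
echo "You are the reviewer."    > prompts/review.md
echo "# Project state, current" > snapshots/state.md

# Round ten's input is built exactly like round one's: the stage
# prompt plus whatever the snapshots say right now. No transcript
# of earlier rounds is ever included -- that is the point.
build_context() {
  cat "prompts/$1.md" snapshots/*.md
}

build_context review
```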

Why Single-Threaded

The loop is deliberately single-threaded.

No parallel agents. No concurrent rounds. One agent at a time.

This was a choice, not a limitation.

Parallel agents need coordination. Coordination needs state. State needs a manager. The manager is the problem I was trying to get rid of.

One agent runs. Finishes. Emits a signal. Next agent runs. No race conditions. No deadlocks. No “agent B read a half-written file from agent A.”

Simple beats clever here. Reliable beats fast.

The Commit Message

It was the longest commit message I’ve written.

Listed everything deleted, line by line. Listed everything the 300 lines replaced. Noted the context reduction. Noted that the first real project run produced zero phantom hallucinations — the reviewer didn’t invent any vulnerabilities, the coder didn’t reference any functions that didn’t exist.

I kept that message because the reasoning matters more than the code.

The Insight

I was using intelligence for a job that needs reliability.

Claude is intelligent. Bash is reliable.

The orchestrator’s job is mechanical. Call the right agent. Route by signal. Stop when done.

It’s a state machine. State machines don’t benefit from intelligence. They benefit from determinism.

Bash is deterministic. I stopped fighting the wrong tool.
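The state-machine claim is concrete: routing by signal is a case statement, and a case statement never improvises. A sketch, with made-up stage names:

```shell
# The loop's routing logic as a transition check: accept the agent's
# signal only if it names a known stage; anything else fails hard
# instead of being "interpreted". Stage names are illustrative.
route() {
  case "$1" in
    develop|review|actualize|done) echo "$1" ;;
    *) echo "unknown signal: $1" >&2; return 1 ;;
  esac
}

route review   # → review
```

An LLM orchestrator handed an unknown signal makes a probabilistic guess. This one returns an error code.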


Next: Claude gets expensive. Codex gets killed. The loop doesn’t care.