Wrapd turns CLI commands into HTTP endpoints. So naturally, we asked ourselves: can we use Wrapd to deploy Wrapd?
The answer is yes. Mostly.
The setup
Wrapd runs on a single server. The stack is:
- API (Node/Express). Handles auth, endpoints, pipelines, billing
- Tunnel (Rust). WebSocket relay between callers and agents
- Dashboard (SvelteKit). The UI
- Worker (Rust). Executes cloud runner jobs in Docker
- Docs (Astro/Starlight)
- Caddy. Reverse proxy, TLS
- Postgres + Redis. Data and job queues
Everything runs in Docker via docker compose. Images are built on the server, pushed to GHCR, pulled back into the compose stack, and restarted.
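In outline, one service's release cycle looks like the sketch below. It is a dry run: `run` echoes each command instead of executing it, and the GHCR image name is a placeholder, not Wrapd's real one.

```shell
# Dry-run sketch of one service's build -> push -> pull -> restart cycle.
# `run` echoes instead of executing; the image name is illustrative.
run() { echo "+ $*"; }

IMAGE="ghcr.io/example/wrapd-api:latest"
run docker build -t "$IMAGE" ./api
run docker push "$IMAGE"
run docker compose pull api
run docker compose up -d --force-recreate api
```

The same cycle repeats per service; the endpoints below carve it into one command per step.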
The endpoints
We created a set of Wrapd endpoints on the same server that runs the production stack. Each one wraps a single deploy operation:
git-pull-latest # git fetch && reset --hard
docker-login # echo $GHCR_TOKEN | docker login
build-api-image # docker build -t ghcr.io/.../api
build-dashboard-image # docker build -t ghcr.io/.../dashboard
build-tunnel-image # docker build -t ghcr.io/.../tunnel
build-worker-image # docker build -t ghcr.io/.../worker
build-docs-image # docker build -t ghcr.io/.../docs
push-all-images # docker push (all 5)
pull-images # docker compose pull
run-migrations # docker compose run --rm api npm run migrate:up
restart-api # docker compose up -d --force-recreate api
restart-dashboard # docker compose up -d --force-recreate dashboard
restart-worker # docker compose up -d --force-recreate worker docs caddy
restart-tunnel # docker compose up -d --force-recreate tunnel
deploy-verify # docker compose ps && curl health
deploy-cleanup          # docker image prune -a -f

Each endpoint is a single, focused command. No scripts, no conditionals, just the one thing it does.
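Each of these is then callable over HTTP. The sketch below only echoes the request rather than sending it; the URL pattern is assumed to match the script endpoint shown later in the post, and the API key is a placeholder.

```shell
# Build (and echo, rather than send) the HTTP call for one endpoint.
# URL pattern and key are assumptions, not Wrapd's documented scheme.
ENDPOINT="restart-api"
KEY="example-api-key"
CMD="curl https://api.wrapd.sh/v1/josejuanqm/$ENDPOINT -X POST -H 'X-API-Key: $KEY'"
echo "$CMD"
```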
The pipelines
We wire these into two pipelines:
build-images
This pipeline pulls the latest code, logs into GHCR, builds all 5 service images in parallel, then pushes them.
git-pull-latest
↓
docker-login
↓
parallel — build all 5 images concurrently
| build-api
| build-dashboard
| build-tunnel
| build-worker
| build-docs
↓
push-all-images

The parallel node is the key. Building 5 Docker images sequentially takes forever. With Wrapd's parallel execution, all 5 builds run at the same time, and the pipeline waits for all of them to finish before pushing.
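The fan-out/fan-in behavior is the same shape as plain shell job control: start each build in the background, then wait for all of them before moving on. Here `build` is a dummy stand-in for `docker build`:

```shell
# Fan-out/fan-in sketch of the parallel node: each build runs as a
# background job, and `wait` blocks until all five have exited.
build() {
  echo "building $1"   # stand-in for: docker build -t ghcr.io/.../$1 .
}

for svc in api dashboard tunnel worker docs; do
  build "$svc" &
done
wait                   # the push step only starts after this returns
echo "all builds done"
```

The total wall time is the longest single build, not the sum, which is exactly the win the pipeline gets.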
full-deploy
This is the main deploy pipeline. It calls build-images as a sub-pipeline, then handles the deploy side:
deploy-cleanup # free disk space first
↓
build-images # sub-pipeline (parallel builds + push)
↓
pull-images # pull fresh images into compose
↓
run-migrations # DB schema updates
↓
restart-api # recreate API container
↓
restart-dashboard # recreate dashboard container
↓
restart-worker # recreate worker + docs + caddy
↓
deploy-verify # health check before the risky part
↓
restart-tunnel      # last step (kills agent connection)

The "mostly" part
Here's the thing about deploying Wrapd with Wrapd: the pipeline runs inside the API process. When the API container restarts (step 5), the pipeline that ordered the restart dies with it.
We solved this by ordering carefully:
- Build and push first. The images are safely in GHCR before we touch anything.
- Migrate before restarting. The new schema is compatible with both old and new code.
- Restart the API, then keep going. Wait, how? Didn't the pipeline just die? Often it doesn't: the API restart is fast (< 5 seconds), so the SSE stream frequently survives. When it doesn't, the remaining steps don't run, but that's fine, because we verify first and restart the tunnel last.
- Tunnel restart is always last. This is the nuclear option: it kills the WebSocket connection between the agent and the hub. The agent auto-reconnects, but the pipeline is dead at this point.
In practice, the pipeline usually completes through verify. The tunnel restart is fire-and-forget: it sends the restart command, the tunnel goes down, the agent reconnects 5 seconds later, and everything is fine.
The deploy script
For the cases where the pipeline can't finish (or when we want a guaranteed full deploy), we also have a deploy.sh script, wrapped as its own Wrapd endpoint:
curl https://api.wrapd.sh/v1/josejuanqm/full-deploy-script \
  -X POST -H "X-API-Key: $KEY"

This endpoint runs git pull && ./deploy.sh on the server. It does the same thing as the pipeline but runs as a single shell process outside the Docker stack, so it survives container restarts. It builds, pushes, pulls, migrates, and restarts everything.
The key detail: it only recreates app services, not the database. Recreating Postgres kills all active sessions and logs everyone out. We learned that one the fun way.
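That rule is easy to encode: scope the recreate to app services by naming them explicitly, so the database containers are never touched. Another dry-run sketch (`run` echoes instead of executing):

```shell
# Recreate only app containers. postgres and redis are deliberately
# absent from the service list, so their containers (and every active
# session) survive the deploy.
run() { echo "+ $*"; }

run docker compose up -d --force-recreate api dashboard tunnel worker docs caddy
```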
What triggers a deploy?
Right now: a push to main. We have a GitHub Actions workflow that builds, tests, and deploys on every push. But GitHub Actions has been unreliable for us (billing issues, runner availability), so the deploy script is our fallback.
The plan is to add a webhook endpoint that listens for GitHub push events and triggers the full-deploy pipeline automatically. The pipeline is already set up; we just need to flip the trigger from manual to webhook and point GitHub at it.
# GitHub webhook → Wrapd pipeline → deploy
POST https://api.wrapd.sh/v1/josejuanqm/full-deploy
X-GitHub-Event: push
X-Hub-Signature-256: sha256=...

When that's live, every push to main will trigger a full build, test, and deploy, all orchestrated by the same product it's deploying. Full circle.
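When that webhook lands, the receiver has to validate the signature header. GitHub computes it as an HMAC-SHA256 of the raw request body, keyed with the shared webhook secret. A sketch of that computation with openssl (secret and payload are made up):

```shell
# Compute the expected X-Hub-Signature-256 value for a webhook body:
# HMAC-SHA256 over the raw payload, keyed with the webhook secret.
SECRET="example-webhook-secret"        # placeholder secret
BODY='{"ref":"refs/heads/main"}'       # placeholder payload
SIG="sha256=$(printf '%s' "$BODY" | openssl dgst -sha256 -hmac "$SECRET" | awk '{print $2}')"
echo "$SIG"
```

The real receiver would compare this against the header value with a constant-time comparison before triggering the pipeline.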
Lessons learned
- Order matters. Restart the thing that kills you last. Build and verify before you touch running services.
- Parallel builds are worth it. 5 sequential Docker builds take 10+ minutes. In parallel, it's the time of the longest one (~3 minutes).
- Don't recreate your database on deploy. Only recreate app services. Your users' sessions will thank you.
- Have a fallback. The pipeline is great for 90% of deploys. The deploy script handles the rest.
- Dogfooding works. We found and fixed a dozen bugs by using our own pipelines for real deployments: parallel node support in sub-pipelines, tilde expansion in working directories, event handling in the test UI.
Try it yourself
The endpoint + pipeline pattern isn't specific to Wrapd's own deploy. You can use the same approach to deploy any Docker Compose stack, Kubernetes service, or even a bare-metal server. Import our deploy templates and customize the commands for your stack.