Push-to-deploy has been a solved problem for a while. I used Heroku years ago and have run on Render for the past two years; both handle deployment well. But I wanted to run on my own server without giving up that convenience. Kamal gives me that.

I run one command and walk away:

kamal deploy

That’s it. Zero-downtime deploy. Docker containers built, pushed, and swapped on the server. Health checks pass before traffic switches. Old containers cleaned up. The whole thing takes about 2 minutes.

What Kamal Actually Does

Kamal — built by the Basecamp team, open source — deploys Docker containers to any server via SSH. No Kubernetes. No orchestration platform. No vendor lock-in. It connects to your server over SSH, pulls your Docker image, starts the new container, waits for health checks, then stops the old one.

The key pieces:

  • kamal-proxy as a reverse proxy, handling SSL via Let’s Encrypt and routing traffic to your containers
  • Docker for packaging your app into images
  • SSH for communicating with servers — no agent to install, no daemon to manage
  • Health checks before traffic switches — if the new container fails its health check, the old container keeps serving traffic

Kamal 2 simplified the configuration significantly from v1. A config/deploy.yml defines the core setup, with secrets in .kamal/secrets.

My Deploy Configuration

Here’s a stripped-down version of what my deploy file looks like for a typical SaaS app:

service: myapp
image: myregistry/myapp

servers:
  web:
    hosts:
      - 123.45.67.89
    cmd: node dist/server.js
    options:
      network: kamal
  worker:
    hosts:
      - 123.45.67.89
    cmd: node dist/worker.js
    options:
      network: kamal

proxy:
  ssl: true
  host: myapp.com
  app_port: 3000
  healthcheck:
    path: /health
    interval: 3

registry:
  server: ghcr.io
  username: myuser
  password:
    - KAMAL_REGISTRY_PASSWORD

env:
  clear:
    NODE_ENV: production
  secret:
    - DATABASE_URL
    - STRIPE_SECRET_KEY

accessories:
  db:
    image: postgres:16
    host: 123.45.67.89
    port: "127.0.0.1:5432:5432"
    env:
      secret:
        - POSTGRES_PASSWORD
    directories:
      - data:/var/lib/postgresql/data
    options:
      network: kamal

This defines two roles — a web server and a background worker — plus a PostgreSQL database as an “accessory” (a long-running container that persists across deploys). Secrets are read locally from .kamal/secrets and injected into the containers at deploy time.

The Deploy Flow

When I run kamal deploy, here’s what happens:

  1. Docker builds the image locally (or in CI)
  2. Image gets pushed to the container registry (I use GitHub Container Registry)
  3. Kamal SSHs into the server
  4. Pulls the new image
  5. Starts a new container alongside the old one
  6. Runs health checks against the new container
  7. Once healthy, kamal-proxy switches traffic to the new container
  8. Old container is stopped and removed

No downtime. If the health check fails, the old container keeps serving traffic. I get an error, fix the issue, and deploy again.
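Steps 4 through 8 are roughly what you would do by hand with Docker. A simplified sketch, not runnable as-is (Kamal's real implementation adds deploy locking, audit logging, and proxy coordination, and the image and container names here are hypothetical):

```shell
# Approximate manual equivalent of the server-side half of `kamal deploy`.
docker pull ghcr.io/myuser/myapp:new            # 4. pull the new image
docker run -d --name myapp-new \
  --network kamal ghcr.io/myuser/myapp:new      # 5. start it alongside the old one

# 6. poll the health endpoint until it passes (from inside the kamal network)
until curl -sf http://myapp-new:3000/health; do sleep 1; done

# 7. kamal-proxy switches routing to the new container via its own API

docker stop myapp-old && docker rm myapp-old    # 8. retire the old container
```

Seeing it spelled out makes clear why step 6 is the safety valve: nothing after it runs until the new container proves itself.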

Why Not Just Stay on a PaaS?

Render and Heroku work fine for deployment. The issues are elsewhere:

  • Cost — PaaS pricing scales fast. A server, worker, and database on Render can easily hit $100+/month for what a $20 Hetzner box handles. Run multiple projects — common for indie developers — and the gap widens quickly
  • Vendor lock-in — your deploy pipeline, environment config, and scaling model are all tied to the platform
  • Limited control — need a custom network setup, a specific Postgres extension, or to tweak container resources? You’re at the mercy of what the platform supports

Kamal gives me the same push-one-thing-and-it-deploys convenience, but on my own server. The container is built once and runs identically everywhere. If the deploy fails, the old container is still running. Rolling back is kamal rollback <VERSION>.
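Rollback works because Kamal keeps previous images around; rolling back boots a container from an earlier version and points the proxy back at it. A typical sequence (the version string here is a placeholder):

```shell
kamal app containers      # list current and previous containers per host
kamal rollback 4a1b2c3    # boot the earlier version and switch traffic back
kamal app logs -f         # tail logs to confirm the rollback took
```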

Zero-Downtime Rolling Updates

Kamal starts the new container, runs health checks, and only switches traffic after the new container is confirmed healthy. The old container keeps serving requests during the transition.

For my users, a deploy is invisible. No maintenance windows. No “we’ll be right back” pages. I deploy during peak hours without thinking twice.

Deploying Multiple Roles

Most SaaS apps aren’t just a web server. I have background workers for job queues, cron containers for scheduled tasks, and sometimes separate API services. Kamal handles this with roles:

kamal deploy                   # deploy everything
kamal deploy --roles=web       # deploy only the web role
kamal deploy --roles=worker    # deploy only the worker

I have a script that detects which parts of the codebase changed and only deploys the affected roles. A frontend-only change doesn’t restart the worker.
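A sketch of what that detection script can look like. The paths and the path-to-role mapping here are hypothetical; adjust them to your repo layout:

```shell
#!/bin/sh
# Map changed file paths (one per line on stdin) to the Kamal roles they
# affect. Mappings are illustrative, not my actual script.
roles_for_changes() {
  roles=""
  while IFS= read -r path; do
    case "$path" in
      src/worker/*) roles="$roles worker" ;;
      src/*|public/*) roles="$roles web" ;;
    esac
  done
  # De-duplicate and join with commas for --roles=
  printf '%s\n' $roles | sort -u | paste -sd, -
}
```

Wired up, it's something like `kamal deploy --roles="$(git diff --name-only origin/main | roles_for_changes)"`.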

Secrets Management

Kamal 2 reads secrets locally from .kamal/secrets when you run a deploy. It can pull from environment variables, 1Password, or other adapters. The simplest setup is referencing local env vars:

# .kamal/secrets
DATABASE_URL=$DATABASE_URL
STRIPE_SECRET_KEY=$STRIPE_SECRET_KEY

Kamal injects these into the container at deploy time. To add or change a secret, I update my local environment and deploy again. The fresh container picks up the new values.

No external secrets manager required. For a solo developer running one or two servers, this is plenty.
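If you do want a password manager in the loop, Kamal's secrets helpers can fetch values at deploy time instead of reading local env vars. A sketch of a .kamal/secrets file using the 1Password adapter (account, vault, and item names are placeholders):

```shell
# .kamal/secrets — values fetched fresh on every deploy
SECRETS=$(kamal secrets fetch --adapter 1password \
  --account my-account --from MyVault/MyItem \
  DATABASE_URL STRIPE_SECRET_KEY)

DATABASE_URL=$(kamal secrets extract DATABASE_URL $SECRETS)
STRIPE_SECRET_KEY=$(kamal secrets extract STRIPE_SECRET_KEY $SECRETS)
```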

Getting Started

The initial setup takes an afternoon — install Docker on your server, create a deploy.yml, push your first deploy. After that, every deploy is one command.
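The rough sequence, assuming a fresh server you can SSH into:

```shell
gem install kamal    # Kamal ships as a Ruby gem
kamal init           # scaffolds config/deploy.yml and .kamal/secrets
# ...edit config/deploy.yml: server IP, image name, registry credentials...
kamal setup          # first deploy: bootstraps Docker on the server,
                     # boots accessories, and deploys the app
```

After `kamal setup` succeeds once, day-to-day life is just `kamal deploy`.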