Version: 0.9.12

init.sh and update.sh

platform v0.9.11 · verified 2026-05-14

Two scripts run on every service host. They divide responsibility cleanly:

  • init.sh — service-specific. Lives at /opt/services/<service>/init.sh. Knows how to bring this service up: bootstrap env, fetch config, decode TLS material, pull images, start containers, wait for health.
  • update.sh — shared deployment manager. Lives at /opt/services/common/update.sh (and is symlinked / copied into each service dir on fetch-config). Wraps init.sh with backup, confirmation, dry-run, and the standard CLI flags. Use this in steady state.

init.sh (service-specific)

What every init.sh does is covered step-by-step in Bootstrap and init. The condensed view is:

  1. source /opt/deployment/.env (bootstrap).
  2. fetch-config (S3 sync) unless --skip-fetch.
  3. Configure Docker log rotation on first run.
  4. Optional: auto-detect PRIVATE_IP.
  5. eval $(common/fetch-env.sh --format export ...) — env resolved into shell memory.
  6. Service-specific magic — TLS material decode, IP detection, log-level mapping, etc.
  7. aws ecr get-login-password | docker login and docker compose pull unless --skip-images.
  8. shred any leftover .env on disk.
  9. docker compose up -d --force-recreate --remove-orphans.
  10. Healthcheck loop, then prune unused images.

Each service's "magic" varies — see Bootstrap and init for the per-service table.

init.sh flags

Flag            Effect
--skip-images   Don't pull Docker images. Use after a config change that doesn't ship new images.
--skip-fetch    Don't fetch-config from S3. Use when iterating on local edits to /opt/services/<service>/.
--restart-only  Implies --skip-images. Re-resolve env and restart containers.
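The interaction between these flags — in particular --restart-only implying --skip-images — can be sketched as a small parser. The variable and function names below are illustrative, not lifted from the real script:

```shell
# Illustrative flag parsing for init.sh; names are hypothetical.
SKIP_IMAGES=0
SKIP_FETCH=0
RESTART_ONLY=0

parse_init_flags() {
  for arg in "$@"; do
    case "$arg" in
      --skip-images)  SKIP_IMAGES=1 ;;
      --skip-fetch)   SKIP_FETCH=1 ;;
      --restart-only) RESTART_ONLY=1
                      SKIP_IMAGES=1 ;;   # --restart-only implies --skip-images
      *) echo "unknown flag: $arg" >&2; return 1 ;;
    esac
  done
}
```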

init.sh exits non-zero on any failure — bootstrap missing, env validation failed, image pull failed, healthcheck timeout — and prints diagnostic context to stderr.
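The healthcheck loop from step 10 follows a common poll-until-deadline shape. A minimal sketch, assuming placeholder values — the actual command, timeout, and interval in init.sh may differ:

```shell
# Poll a health command until it succeeds or the deadline passes.
# Returns non-zero on timeout, matching init.sh's exit-on-failure behavior.
wait_healthy() {
  health_cmd=$1 timeout=$2 interval=$3
  elapsed=0
  while [ "$elapsed" -lt "$timeout" ]; do
    if sh -c "$health_cmd" >/dev/null 2>&1; then
      return 0
    fi
    sleep "$interval"
    elapsed=$((elapsed + interval))
  done
  echo "healthcheck timed out after ${timeout}s" >&2
  return 1
}
```

Typical usage would be something like `wait_healthy "curl -fsS http://localhost:8080/health" 120 5` (URL and numbers are examples, not the real values).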

update.sh (deployment manager)

update.sh is the recommended entry point in steady state. It's the same script for every service — it originates from common/update.sh, and each service dir gets its own copy after fetch-config.

cd /opt/services/api
./update.sh --ecr-tag v0.9.11

What it does:

  1. Parse CLI flags.
  2. Update /opt/deployment/.env with new values for ECR_TAG / CONFIG_REF (only the fields you supplied).
  3. Show planned changes; prompt for confirmation unless --yes / -y.
  4. Back up the previous .env to /opt/deployment/backups/env/.env.<timestamp>.
  5. Call the service's init.sh with appropriate flags.
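Step 2 — rewriting only the fields you supplied — can be sketched with a small helper. The function name and sed usage are illustrative (GNU sed's -i is assumed), not the manager's actual implementation:

```shell
# Update one KEY=VALUE line in a bootstrap .env, appending it if absent.
# Other lines are left untouched, mirroring "only the fields you supplied".
set_env_field() {
  env_file=$1 key=$2 value=$3
  if grep -q "^${key}=" "$env_file"; then
    sed -i "s|^${key}=.*|${key}=${value}|" "$env_file"
  else
    printf '%s=%s\n' "$key" "$value" >> "$env_file"
  fi
}
```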

update.sh flags

Flag              Effect
--ecr-tag TAG     Update ECR_TAG in bootstrap .env, pull the new image, recreate containers.
--config-ref REF  Update CONFIG_REF in bootstrap .env, re-sync the bundle from S3, recreate containers.
--skip-images     Pass through to init.sh — don't pull images.
--restart-only    No image pull, no S3 sync — just re-resolve env and restart. Use after an SSM / SM value change.
--dry-run         Print the planned changes and exit. Nothing is written.
--yes, -y         Skip the confirmation prompt. Suitable for CI.

Examples

# Roll a new image tag
./update.sh --ecr-tag v0.9.11

# Roll a new config bundle (e.g. a Caddyfile or vars.yaml change)
./update.sh --config-ref v0.9.11-hotfix

# Restart-only after rotating REDIS_PASSWORD in Secrets Manager
./update.sh --restart-only

# Preview what the next update would change without applying
./update.sh --ecr-tag v0.10.0 --dry-run

# Non-interactive (CI)
./update.sh --ecr-tag v0.9.11 --yes

Backups and rollback

Every update.sh run snapshots the previous bootstrap .env:

ls -lah /opt/deployment/backups/env/
# .env.20260514_142301
# .env.20260513_180044

To roll back:

cp /opt/deployment/backups/env/.env.20260513_180044 /opt/deployment/.env
cd /opt/services/api && ./init.sh

The image and config that the bootstrap .env pointed at when the backup was taken will be re-pulled and started.
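Because the snapshot filenames embed a sortable timestamp, "roll back to the most recent backup" can be scripted without eyeballing the listing. A hypothetical helper (not shipped with the bundle):

```shell
# Print the filename of the newest .env snapshot in a backup dir.
# The .env.YYYYMMDD_HHMMSS names sort lexicographically, so the
# last entry after `sort` is the most recent.
latest_env_backup() {
  backup_dir=${1:-/opt/deployment/backups/env}
  ls -A "$backup_dir" | grep '^\.env\.' | sort | tail -n 1
}
```

Usage would look like `cp "/opt/deployment/backups/env/$(latest_env_backup)" /opt/deployment/.env`, followed by the service's init.sh as shown above.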

Driving updates remotely

The deployment bundle ships a scripts/update-service.sh that SSHes via Bastion to a service host and runs update.sh there:

# Update a single service across the deployment
./scripts/update-service.sh staging api ~/.ssh/id_itk_hetzner --ecr-tag v0.9.11

# Update one specific Voice instance
./scripts/update-service.sh staging voice-03 ~/.ssh/id_itk_hetzner --restart-only

# Rolling update across all Voice instances
./scripts/update-service.sh staging voice ~/.ssh/id_itk_hetzner --ecr-tag v0.9.11

# Preview without applying
./scripts/update-service.sh staging database ~/.ssh/id_itk_hetzner --ecr-tag v0.9.12 --dry-run

Syntax: ./scripts/update-service.sh <env> <service|service-NN> <ssh-key> [update.sh-flags]. All flags after the SSH key are forwarded to update.sh on the remote host.
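The forwarding behavior can be sketched as building a single remote command string after consuming the three positional arguments. The function name is hypothetical, and the actual SSH plumbing through the Bastion is omitted:

```shell
# Consume <env> <service|service-NN> <ssh-key>, then forward the rest
# verbatim as update.sh flags on the remote host.
build_remote_cmd() {
  service=$1; shift
  cmd="cd /opt/services/$service && ./update.sh"
  for flag in "$@"; do
    cmd="$cmd $flag"
  done
  printf '%s\n' "$cmd"
}
```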

Running init.sh directly

You will occasionally want to call init.sh without update.sh:

  • After your provisioning workflow recreates an instance from scratch (cloud-init usually does this for you on first boot).
  • After manually editing /opt/deployment/.env for a debug session.
  • When update.sh itself is broken (rare).

The same flags documented above apply. No backup is taken on this path — the .env safety net exists only when you go through update.sh.
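If you want that safety net before a direct init.sh run, a manual snapshot in the same timestamped naming scheme is a few lines (the helper name is made up; the paths match the defaults described above):

```shell
# Manually snapshot the bootstrap .env before calling init.sh directly,
# using the same .env.<timestamp> naming as update.sh's backups.
snapshot_env() {
  src=${1:-/opt/deployment/.env}
  dir=${2:-/opt/deployment/backups/env}
  mkdir -p "$dir"
  cp "$src" "$dir/.env.$(date +%Y%m%d_%H%M%S)"
}
```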

Inspecting state on the host

# Service status
cd /opt/services/<service> && docker compose ps

# Bootstrap env
cat /opt/deployment/.env

# How many env vars would the service resolve right now? (wc -l keeps secret values off the terminal)
/opt/services/common/fetch-env.sh \
--manifest /opt/services/<service>/vars.yaml \
--bootstrap /opt/deployment/.env \
--format export | wc -l

# Last few `update.sh` runs
ls -lah /opt/deployment/backups/env/

See also