Jaume Sabater
CTO and systems engineer

Orchestration, production, and best practices with Docker Compose

Once your application stack is defined and configured, the focus shifts to how it runs, scales, and adapts to different environments. In this final part, we will look at orchestration features such as dependencies and health checks, how to manage containers through common lifecycle commands, and how to prepare your setup for production using override files. We will finish with practical recommendations and best practices to help you deploy and maintain your Docker Compose projects effectively.

Dependencies

In multi-service environments, certain containers must start before others. For example, a web application typically depends on a database, a cache, or an object storage service. Docker Compose provides the depends_on directive to express these relationships and ensure that dependent containers are started before the services that rely on them.

However, it is important to note that this short form of depends_on only guarantees startup order, not readiness. A container may be running but not yet ready to accept connections; the long form with condition: service_healthy, used in the full example below, combines dependencies with health checks to wait for actual readiness.

services:
  apache:
    depends_on:
      - postgres
      - redis
      - garage
    [..]

In this example, apache will not start until the postgres, redis, and garage containers have been started. This roughly corresponds to what you would otherwise have to do manually with the Docker CLI:

docker run -d postgres
docker run -d redis
docker run -d garage
docker run -d apache

Health checks

While depends_on governs startup sequence, health checks determine readiness, i.e., whether a service is actually functional and responsive.

Health checks allow Compose to monitor the state of a container over time, retrying or reporting issues if the service is not yet ready or becomes unhealthy.

You define a health check under each service with the healthcheck key, which specifies a command to test the service, how often to test it, and when to consider it healthy or unhealthy.

services:
  postgres:
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "demouser"]
      interval: 5s
      retries: 5
    [..]

In this configuration:

  • test runs the command pg_isready -U demouser inside the container to check whether PostgreSQL is accepting connections.
  • interval defines how often the test runs (every 5 seconds in this case).
  • retries sets how many consecutive failures are tolerated before the service is marked as unhealthy.

When a service has a health check defined, its health status becomes visible through Docker commands such as docker ps or docker inspect.
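
For example, assuming the postgres service defined above, you can query its health state directly (replace <postgres_container> with the actual container name):

# Show the status reported by the health check: starting, healthy or unhealthy
docker inspect --format '{{.State.Health.Status}}' <postgres_container>

docker compose ps also displays the health state next to each service that defines a health check.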

Full example

Here is a complete Compose file with four services, making meaningful use of volumes, networks, and health checks.

services:

  apache:
    build:
      dockerfile: docker/apache/Dockerfile
    image: myphpapp/apache
    ports:
      - "8080:80"
    volumes:
      - uploads:/var/www/html/uploads
    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_started
      garage:
        condition: service_healthy
    networks:
      - myphpapp

  postgres:
    build:
      dockerfile: docker/postgres/Dockerfile
    image: myphpapp/postgres
    environment:
      POSTGRES_USER: demouser
      POSTGRES_PASSWORD: mypassword
      POSTGRES_DB: demodb
    volumes:
      - postgres:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD", "pg_isready", "-U", "demouser"]
      interval: 5s
      retries: 5
    networks:
      - myphpapp

  redis:
    image: redis:8.2
    command: ["redis-server", "--save", "60", "1", "--loglevel", "warning"]
    networks:
      - myphpapp

  garage:
    image: dxflrs/garage:v2.1.0
    ports:
      - "3900:3900"  # S3-API
      - "3902:3902"  # Web UI
      - "3903:3903"  # Admin API
    environment:
      # you can optionally set RUST_LOG for debugging/visibility
      - RUST_LOG=garage=info
    volumes:
      - garage_meta:/var/lib/garage/meta
      - garage_data:/var/lib/garage/data
      - ./garage.toml:/etc/garage.toml:ro
    healthcheck:
      test: ["CMD", "/garage", "status"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 30s
    networks:
      - myphpapp

volumes:
  postgres:
  uploads:
  garage_data:
  garage_meta:

networks:
  myphpapp:

Key aspects of this docker-compose.yml file:

  • postgres, uploads, garage_data, and garage_meta: named volumes for persistent storage.
  • depends_on with health check conditions ensures apache only starts after postgres and garage are healthy; redis only needs to have started, since the application can work without the cache.
  • networks: myphpapp: all containers share a private network for internal communication.

Once we have the file, all we need to do is:

docker compose build
docker compose up -d
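
Since the services define health checks, you can also make Compose block until they report healthy before returning, which is convenient in scripts and CI pipelines:

docker compose up --detach --wait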

Garage quickstart

Regarding the use of Garage S3 object storage, here are some quick-start commands, which can be run from the host (via docker exec) or inside the container, after bringing it up for the first time:

# Create a key for your PHP app
docker exec -it <garage_container> garage key create appuser

# List keys
docker exec -it <garage_container> garage key list

# Create a bucket for uploads
docker exec -it <garage_container> garage bucket create uploads

# Grant full access to that key
docker exec -it <garage_container> garage bucket allow uploads --key <KEY_ID> --read --write
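
Note that on a brand-new node, Garage requires a cluster layout to be assigned and applied before keys and buckets can be created, so the commands above may fail at first. A minimal single-node layout, assuming a zone named dc1 and 1G of capacity (adjust to your setup; the exact flags can vary between Garage releases), would be:

# Show the node ID of this Garage instance
docker exec -it <garage_container> garage status

# Assign the node to a zone with some capacity, then apply the layout
docker exec -it <garage_container> garage layout assign -z dc1 -c 1G <node_id>
docker exec -it <garage_container> garage layout apply --version 1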

A working garage.toml for development could look like this:

# Unique node identifier and replication factor
node_id = "node1"
replication_factor = 1

# Internal RPC settings
rpc_bind_addr = "0.0.0.0:3901"
rpc_secret = "bc92a31f07afb47b94f4275ec1729a2d4bf1241ba7497e305f1a19699cecee42" # openssl rand -hex 32

# Data directories matching Docker volume mounts
metadata_dir = "/var/lib/garage/meta"
data_dir = "/var/lib/garage/data"

# S3 API
[s3_api]
api_bind_addr = "0.0.0.0:3900"
s3_region = "garage"
# root_domain = "s3.garage.local"  # optional, only needed for virtual-hosted buckets

# Admin API
[admin]
api_bind_addr = "0.0.0.0:3903"

# Web dashboard (optional)
[web]
api_bind_addr = "0.0.0.0:3902"

# Metadata backend
[metadata]
backend = "sled"

# Logging
[log]
level = "info"
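
Once the layout, key, and bucket are in place, you can verify the S3 endpoint from the host with any S3-compatible client. For example, assuming the AWS CLI is installed and using the credentials of the key created earlier:

# Credentials of the appuser key (placeholders)
export AWS_ACCESS_KEY_ID=<KEY_ID>
export AWS_SECRET_ACCESS_KEY=<SECRET_KEY>

# List the uploads bucket through Garage's S3 API
aws --endpoint-url http://localhost:3900 --region garage s3 ls s3://uploads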

Common commands

Docker Compose can show all running services with a single command:

docker compose ps

And it can also show logs from all services with a single command:

docker compose logs

If you would rather just see the logs of a specific service, use this command:

docker compose logs --follow apache

As with the docker logs command, you can also use --since, --until, and --tail, among others.
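
For example, to follow only the last hundred lines and restrict the output to the past ten minutes:

docker compose logs --follow --tail 100 --since 10m apache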

To execute commands in a running container, use the following command:

docker compose exec apache bash

The docker compose exec command allocates a pseudo-TTY and operates in interactive mode by default.
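
For non-interactive, one-off commands, for instance in scripts or CI jobs, you can disable the pseudo-TTY with -T:

# Assumes the PHP CLI is available inside the apache image
docker compose exec -T apache php --version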

Docker Compose can also scale services, when applicable:

docker compose up --detach --scale apache=2

It can shut down all services with a single command. This stops and removes containers, networks, and optionally volumes:

docker compose down

To also remove volumes, add the -v or --volumes argument:

docker compose down --volumes
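
If you also want to remove the images used by the services, the --rmi flag accepts local (images without a custom tag) or all:

docker compose down --volumes --rmi all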

Here is a summary table with the most commonly used commands:

Command                              Description
docker compose up -d                 Build and start all containers
docker compose down                  Stop and remove containers and networks
docker compose ps                    List running services
docker compose logs -f               Follow logs
docker compose exec <service> bash   Open a shell in a service
docker compose up --scale apache=2   Scale services

Overrides

Overrides are useful for development, staging, and production setups. As a starting point, you would have a docker-compose.yml and a docker-compose.<override>.yml. Then you would run the command as follows:

docker compose --env-file .env.<override> --file docker-compose.yml \
  --file docker-compose.<override>.yml up --detach

These override files extend your existing docker-compose.yml to tailor it for staging or production use. The typical pattern is to keep development defaults in docker-compose.yml and override or disable development-specific settings in docker-compose.staging.yml and docker-compose.production.yml.

Example production changes would be:

  • Use pre-built images, i.e., use image instead of build:.
  • Disable bind mounts, e.g., ./src:/var/www/html.
  • Set fixed volume mounts for persistence.
  • Restrict external ports, only exposing what is necessary.
  • Add resource limits.
  • Use environment variables securely, i.e., loaded from .env.production.
  • Optionally, mark the backend network as internal.

Let’s say we have the following docker-compose.yml file in our development environment:

services:
  apache:
    build:
      dockerfile: docker/apache/Dockerfile
    image: myphpapp/apache
    ports:
      - "8080:80"
    volumes:
      - ./src:/var/www/html
    environment:
      - DEBUG=true

And we have the following docker-compose.production.yml file:

services:
  apache:
    build: !reset null     # remove the build section; use the published image
    image: myphpapp/apache
    ports: !override       # replace, rather than merge with, the development port mapping
      - "80:80"
    volumes: !override []  # drop the development bind mount
    environment:
      - DEBUG=false

When running the following command:

docker compose --env-file .env.production --file docker-compose.yml \
  --file docker-compose.production.yml up --detach

Docker Compose merges both files, in order:

  • It starts from the base file (docker-compose.yml).
  • It then applies the overrides from docker-compose.production.yml, layer by layer.
  • The result is a single, merged configuration that Compose uses to start your services.

By default, Compose replaces scalar values but merges list-valued options such as ports and volumes, so an empty list or an additional port mapping in the override would be combined with the base values rather than substituted for them. The !override and !reset tags tell Compose to replace or drop those values outright. Therefore, the effective merged result would be:

services:
  apache:
    build: null            # removed by the override
    image: myphpapp/apache
    ports:
      - "80:80"            # overridden
    volumes: []            # overridden
    environment:
      - DEBUG=false        # overridden

The merged config will omit the build key completely, meaning Compose will not try to build the image. Instead, it will simply pull or use the existing myphpapp/apache image.
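
You can inspect the result of the merge yourself, without starting anything, by asking Compose to print the fully resolved configuration:

docker compose --env-file .env.production --file docker-compose.yml \
  --file docker-compose.production.yml config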

Full example

A more complete example of a docker-compose.production.yml override file could look like this:

services:

  apache:
    build: !reset null
    image: myphpapp/apache
    ports: !override
      - "80:80"
    volumes: !override []
    environment:
      APP_ENV: production
      DB_HOST: postgres
      DB_NAME: ${POSTGRES_DB}
      DB_USER: ${POSTGRES_USER}
      DB_PASSWORD: ${POSTGRES_PASSWORD}
      REDIS_HOST: redis
      GARAGE_ENDPOINT: http://garage:3900
      GARAGE_ACCESS_KEY: ${GARAGE_ACCESS_KEY}
      GARAGE_SECRET_KEY: ${GARAGE_SECRET_KEY}
    deploy:
      resources:
        limits:
          cpus: "1.0"
          memory: 512M
    networks:
      - frontend
      - backend

  postgres:
    image: postgres:17
    restart: always
    environment:
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRES_DB: ${POSTGRES_DB}
    volumes:
      - postgres:/var/lib/postgresql/data
    networks:
      - backend
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 512M

  redis:
    image: redis:8.2
    restart: always
    command: ["redis-server", "--save", "60", "1", "--loglevel", "warning"]
    sysctls:
      net.core.somaxconn: 511
      vm.overcommit_memory: 1
    volumes:
      - redis:/data
    networks:
      - backend
    deploy:
      resources:
        limits:
          cpus: "0.25"
          memory: 128M

  garage:
    image: dxflrs/garage:v2.1.0
    restart: always
    environment:
      - RUST_LOG=garage=info
    ports:
      - "3900:3900"
    volumes:
      - garage_meta:/var/lib/garage/meta
      - garage_data:/var/lib/garage/data
      - ./garage.toml:/etc/garage.toml:ro
    networks:
      - backend
    deploy:
      resources:
        limits:
          cpus: "0.5"
          memory: 256M

volumes:
  postgres:
  redis:
  garage_data:
  garage_meta:

networks:
  frontend:
    external: true
  backend:
    driver: bridge
    internal: true
    ipam:
      config:
        - subnet: 172.28.0.0/16

Our .env.production file would be similar to this:

# PostgreSQL
POSTGRES_USER=<postgres_user>
POSTGRES_PASSWORD=<postgres_password>
POSTGRES_DB=<postgres_database>

# Garage
GARAGE_ACCESS_KEY=<access_key>
GARAGE_SECRET_KEY=<secret_key>
GARAGE_REGION=garage             # default region
GARAGE_ENDPOINT=http://garage:3900

Note that the following directives changed from the development settings:

Feature                 Development                    Production
Apache image            Built locally (build)          Pulled from registry (image)
Code mounting           Bind mount in apache service   Removed for immutability
Exposed ports           8080:80                        80:80
Resource limits         None                           Added under deploy.resources.limits
Backend network         Default bridge                 Internal network, isolated
Environment variables   Inline values                  Loaded from .env.production
Restart policy          Default (none)                 restart: always for resilience
Redis persistence       In-memory only                 Persisted in a named volume
Network                 Default                        Separate backend and frontend networks

Common pitfalls and best practices

Even with a solid understanding of Docker Compose syntax, it is easy to fall into subtle configuration traps or overlook practical habits that make projects maintainable.

This section summarizes essential guidelines to help ensure that your Compose environments remain secure, predictable, and portable across stages (development, staging, and production).

  • Use .env files for secrets and configuration. Keep credentials and environment-specific values out of your Compose files.
  • Use named volumes for persistence. Anonymous volumes are easily recreated and can lead to data loss.
  • Always name networks explicitly. It improves readability and avoids unexpected network reuse across projects.
  • Keep containers small. Run one main service per container to preserve modularity and scalability.
  • Avoid latest tags in production. Pin image versions to prevent unintended updates from breaking your setup.
  • Document your Compose setup in a README.md. Include startup instructions, environment variable descriptions, and any project-specific notes.
