Complete Docker Compose tutorial for orchestrating containers

Last update: 17/12/2025
Author: Isaac
  • Docker Compose allows you to define and manage multiple containers from a single YAML file, making it easier to work with microservices-based applications.
  • The services section is mandatory and is complemented by networks, volumes, configs and secrets to control communication, persistence and configuration.
  • Examples like Flask+Redis or a full stack app with frontend, backend and database show how Compose simplifies development and deployment.
  • The commands docker compose up, down, ps and logs form the basic workflow for bringing up, debugging, and stopping container stacks.

Docker Compose tutorial step by step

If you've already experimented with containers and seen that a "real" app needs more than one service running at the same time (database, API, frontend, cache…), sooner or later you'll run into Docker Compose. This tool lets you bring up that whole setup with a single file and a couple of commands, without juggling terminals and endless scripts.

In this tutorial you will learn what Docker Compose is, how its compose.yaml file works, and how to orchestrate clean multi-container applications: from simple examples with Flask and Redis to more complex architectures with frontend, backend, and database. You'll also learn about networking, volumes, configs, secrets, and the key commands to work comfortably in development and in more demanding environments.

What is Docker Compose and why is it worth using?

Docker Compose is a Docker tool that lets you define and manage multiple containers as if they were a single application. Instead of starting each service manually with its own "docker run" and parameters, you describe everything in a YAML file and start it with a single command.

The beauty of Compose is that many modern applications are built as microservices that live in individual containers: a database, an API, a frontend, a queuing system, a Redis-like cache, and so on. Docker recommends that each container run a single service, so if you try to fit everything into a single image, you end up with a monster that is difficult to scale and maintain.

You could run two or more services within the same container, but that negates many of Docker's advantages: if one fails, it drags the others down; you can't scale only the part that receives the most load; and managing logs, resources, and failures becomes a major mess.

With Docker Compose you define each service separately, specifying how they connect to each other, what data they persist, what ports they expose, what environment variables they use, and so on. This way, if one container fails, the rest can continue to function (depending on how you configure it), and scaling a specific piece is as simple as modifying its settings or its number of replicas.

Furthermore, Compose fits perfectly into CI/CD workflows and production deployments. You can use it directly with tools like Portainer or Docker Swarm, and if you work with Kubernetes, projects like Kompose let you translate a compose.yaml file into Kubernetes manifests without rewriting everything by hand.

Configuring services with Docker Compose

Prerequisites for following the Docker Compose tutorial

To comfortably follow the examples in this tutorial you need to have Docker and Docker Compose installed. Today there are two main paths:

  • Docker Engine + Docker Compose installed as standalone binaries.
  • Docker Desktop, which includes Docker Engine, Docker Compose, and a graphical interface.

It's important to have a minimal grounding in basic Docker commands (images, containers, ports, volumes) and not to be afraid of the command line. The examples assume Linux (for example Ubuntu 22.04), but the logic applies equally on Windows and macOS with Docker Desktop.

Check that everything is in order by running something as simple as "docker --version" and "docker compose version" in your terminal. If both commands respond without error, you are ready to continue with the examples.

Basic structure of a compose.yaml file

The heart of Docker Compose is the compose.yaml file (or docker-compose.yml). There we describe which services we want to bring up and how they relate to each other. Although the version field was previously used to mark the format version, current documentation recommends omitting it so that the most recent version of the schema is always used.

Within the Compose file you will have several possible sections, although only one is mandatory: services. From there you can add other sections depending on the complexity of your project:

  • services: definition of each microservice (mandatory).
  • networks: custom networks to control communication between containers.
  • volumes: volumes to persist data or share it between services.
  • configs: configuration files for services (e.g., web server configuration files).
  • secrets: management of sensitive information (passwords, API keys…).

Throughout this tutorial you will see how to combine all these sections in a typical project that includes an application, a database, and an API, plus an example of a Python web app with Flask and Redis.
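As a purely illustrative sketch (the service name, ports, and paths here are made up), a compose.yaml skeleton combining these sections could look like this:

```yaml
services:
  app:
    build: ./app            # build the image from a local Dockerfile
    ports:
      - "8080:80"           # host:container port mapping
    networks:
      - app-net
    volumes:
      - app-data:/var/lib/app

networks:
  app-net:
    driver: bridge          # private network with DNS by service name

volumes:
  app-data:                 # named volume managed by Docker
```

Each top-level section is explored in detail in the rest of the tutorial.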

Services in Docker Compose: the core of the definition

The services section is the essential piece of any Compose file. In it you define each of the containers that will make up your application, giving them whatever names you want (for example web, database, api, redis, etc.).

For each service you can set a good number of parameters. Among them are some that are widely used in real-world projects:

The build parameter indicates where to find the Dockerfile from which the service image will be built. Typically, you specify a context (directory) containing the Dockerfile of the application you want to package.

If you already have an image, or want to use one from a registry, you use image to reference it. The name follows the format [<registry>/][<project>/]<image>[:<tag>|@<digest>]. And if you need to control when that image is downloaded or updated, you can use pull_policy.

The ports field is used to map ports between the host and the container. The syntax is of the form [HOST:]CONTAINER[/PROTOCOL]. For example, if a PostgreSQL database listens on port 5432 inside the container and you want to expose it on port 5555 of the host, you would put "5555:5432" in the list of ports.


The restart policy is controlled with restart, which indicates what to do when a container exits with an error or stops. Typical values are no, always, on-failure and unless-stopped, allowing critical services to remain operational even if they experience occasional outages.

If one service needs another to be available before it starts, you can use depends_on to define dependencies between containers. A classic example is an app that requires the database to be up and running to avoid failing on the initial connection.

For configuration and credentials, you have two common approaches: env_file and environment. With env_file you point to one or more .env files containing the environment variables, while with environment you list them directly in the YAML. The best practice is to use .env files to prevent passwords and sensitive data from being embedded in the compose.yaml file itself.

The volumes parameter allows mounting host paths or named volumes inside the container; you'll use it both for data persistence and for sharing folders between services. Within a service you only reference the volumes, which you can then define in the top-level volumes section if you need them to be shared or managed more explicitly.

With these fields you can already build fairly complete services. The Compose specification includes many more advanced options (healthchecks, resource limits, startup commands, etc.), but with these you already cover the most common uses.
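Putting these parameters together, a hypothetical service definition (the images, paths, and variable names are illustrative, reusing the 5555:5432 mapping mentioned above) might look like:

```yaml
services:
  database:
    image: postgres:16          # image reference: [<registry>/][<project>/]<image>[:<tag>]
    restart: unless-stopped     # keep the service up across occasional failures
    env_file:
      - .env                    # credentials live outside compose.yaml
    ports:
      - "5555:5432"             # expose container port 5432 on host port 5555
    volumes:
      - db-data:/var/lib/postgresql/data   # persist the database files

  api:
    build: ./api                # build from the Dockerfile in ./api
    depends_on:
      - database                # start the database before the API
    environment:
      - DB_HOST=database        # services resolve each other by name

volumes:
  db-data:
```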

Example 1: Web application in Python with Flask and Redis

A typical example for understanding Docker Compose is a simple web application in Python, using Flask to serve pages and Redis as an in-memory store for a hit counter. The idea is that you don't need to install Python or Redis on your machine: everything runs inside containers.

The workflow would be something like this: first you create a directory for the project and inside you add a file app.py with the Flask code. In that code you use "redis" as the hostname and port 6379, which is the default port for the Redis service in your container.

The function that manages the visit counter tries to connect to Redis several times before giving up, since the Redis container may take a few seconds to become available when you bring up the whole stack.

In addition to app.py, you create a requirements.txt file with the Python dependencies (for example Flask and redis-py), and a Dockerfile that specifies how to build your web application image: a Python base image (3.7, 3.10 or whatever), the working directory, environment variables for Flask, installation of gcc and system dependencies, a copy of requirements.txt, package installation, and a copy of the code.

In the Dockerfile you also mark the port the container will expose (for example 5000) and define the default command, normally flask run --debug or similar, so that it starts automatically when the container is created.

With all this ready, the compose.yaml file defines two services: one called, for example, web, which is built from the project's Dockerfile and exposes port 8000 externally (mapping the host's port 8000 to the container's port 5000), and another called redis that pulls the official Redis image from Docker Hub.
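A minimal compose.yaml for this setup could be sketched like this (the redis:alpine tag is one common choice; any official Redis tag works):

```yaml
services:
  web:
    build: .                # Dockerfile of the Flask app in the project root
    ports:
      - "8000:5000"         # host port 8000 -> container port 5000 (Flask)
  redis:
    image: redis:alpine     # official Redis image from Docker Hub
```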

To launch the application, simply navigate to the project directory and run "docker compose up". Compose takes care of downloading the Redis image, building your web application image, and starting both services in the correct order.

Once it's up and running, you open http://localhost:8000 (or http://127.0.0.1:8000) in your browser and you should see a "Hello World" type message and a visit counter that increases each time you reload the page. If you inspect the local images with docker image ls, you'll see something like redis and web created or downloaded.

When you want to stop everything, you can press CTRL+C in the terminal where you left "docker compose up" running, or execute docker compose down from the project directory. This will stop and remove the containers created by that Compose file.

Improving workflow: Bind mounts and Compose Watch

Working in development with Docker is more convenient if you don't have to rebuild the image every time you touch the code. That's where bind mounts and, in more recent versions, Docker Compose Watch come into play.

A bind mount involves mounting a folder from your machine inside the container. In the compose.yaml file, you add a volumes section to the web service that maps the project directory to the container's working directory, for example .:/code. This way, any changes you make in your editor are instantly reflected inside the container.

If you also activate Flask's debug mode with the variable FLASK_DEBUG=1, the flask run command will automatically reload the application when it detects changes in the files, without needing to stop and restart.

Docker Compose Watch takes it a step further: you can use "docker compose watch" or "docker compose up --watch" so that Compose monitors project files and synchronizes changes with containers more intelligently. When you save a file, it is copied into the container, and the development server updates the application without restarting the entire container.
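Both techniques can be sketched on the web service like this (the /code target assumes the working directory mentioned above; the watch attribute requires a recent Compose version):

```yaml
services:
  web:
    build: .
    ports:
      - "8000:5000"
    environment:
      - FLASK_DEBUG=1       # Flask reloads the app when files change
    volumes:
      - .:/code             # bind mount: local edits appear instantly in the container
    develop:
      watch:
        - action: sync      # Compose Watch: copy changed files into the container
          path: .
          target: /code
  redis:
    image: redis:alpine
```

In practice you would use either the bind mount or Compose Watch; showing both here just illustrates where each one goes.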

Try, for example, changing the welcome message in app.py from "Hello World!" to a phrase like "Hello from Docker". Save the file, refresh your browser, and you'll see the new message instantly, while the visit counter keeps running without losing its state.

And when you finish working, as always, you can run docker compose down to shut down and clean up the containers that were running in that stack.


Example 2: Full stack app with frontend, backend and database

To see Docker Compose in a somewhat more realistic architecture, imagine a to-do list application (Todo List) with a frontend in Vue.js, an API in Node.js, and a MongoDB database. Each part lives in its own directory and has its own Dockerfile.

In the repository you might find a frontend folder with the Vue app and a backend folder with the Node server. The backend exposes endpoints to create, list, update and delete tasks, and connects to MongoDB to store them. The frontend consumes these endpoints to display and manage the task list in the browser.

The docker-compose.yml file sits at the root of the project and defines three services: frontend, backend and database. The frontend service is built from the Dockerfile in the corresponding folder, usually exposing internal port 80 and mapping it to port 5173 of the host (for example, to use the same URL as in local development).

The backend is built from the Dockerfile in the backend directory, exposes port 3000 (both inside and outside the container, if you want to simplify) and declares a dependency on the database to ensure that MongoDB is available when it starts up.

The database service directly uses the official MongoDB image and mounts a volume, say mongodb_data, at /data/db, which is where Mongo stores its data. That volume is declared in the top-level volumes section of the Compose file, so the data persists even if you delete and recreate the containers.

Finally, all these services connect through a custom network, for example my-network, defined in the networks section. This allows them to resolve each other by service name (the backend can connect to Mongo using the hostname database) and keeps the traffic encapsulated in that isolated network.
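Based on the description above, the docker-compose.yml could be sketched like this (directory names, ports, and the network and volume names follow the example):

```yaml
services:
  frontend:
    build: ./frontend         # Vue app with its own Dockerfile
    ports:
      - "5173:80"             # host 5173 -> container 80
    networks:
      - my-network
  backend:
    build: ./backend          # Node API with its own Dockerfile
    ports:
      - "3000:3000"
    depends_on:
      - database              # make sure MongoDB comes up first
    networks:
      - my-network
  database:
    image: mongo              # official MongoDB image
    volumes:
      - mongodb_data:/data/db # where Mongo stores its data
    networks:
      - my-network

networks:
  my-network:
    driver: bridge

volumes:
  mongodb_data:
```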

With the configuration ready, running docker compose up at the root of the project builds or downloads the images and launches the three containers. You can check that everything is in place with docker compose ps, and then access http://localhost:5173 to view the Vue app in your browser and create your first tasks.

Networking in Docker Compose: connecting services to each other

Networks are the layer that lets your containers "see" each other and talk in a controlled manner. By default, Docker already creates networks for Compose, but defining them explicitly gives you more clarity and control over what can communicate with what.

It works simply: each service includes a networks field where you indicate which networks it belongs to, and then in the top-level networks section you define those networks with their configuration. The most common (and often recommended) approach is to use the bridge driver.

A bridge network creates a private network space for your containers, with automatic DNS resolution based on the service name. That means, for example, that if your database service is called database, any other service on the same network can connect using just database as the hostname.

In a project with a frontend, backend, and database, you might decide, for example, to create a frontend network and a backend network. The frontend would connect to the backend, and the backend to the database, but the frontend and the database wouldn't have to share a network, reducing the internal exposed surface area.

In code, this translates to something as straightforward as assigning each service its corresponding network, and then defining those networks with the bridge driver. At the application level, the simplest approach is to use the service name as the host when configuring connections: from app to database, for example, simply by indicating that the database host is "database".
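A sketch of that split (the network names front-net and back-net are made up for illustration):

```yaml
services:
  frontend:
    build: ./frontend
    networks:
      - front-net           # can reach the app, but not the database
  app:
    build: ./app
    networks:
      - front-net           # reachable from the frontend
      - back-net            # can also reach the database
  database:
    image: postgres:16
    networks:
      - back-net            # isolated from the frontend

networks:
  front-net:
    driver: bridge
  back-net:
    driver: bridge
```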

Volumes in Docker Compose: Data persistence

Volumes are the recommended way to persist information generated by the containers, such as databases, user files, backups, etc. They are also used to share data between services within the same stack.

In the services section you can mount volumes directly with volumes, but when you want a volume to be accessible by multiple containers, or want to manage it more explicitly, you also define it in the top-level volumes section of compose.yaml.

Imagine you want to set up a backup system for your database. You would have the database service mounting a volume where it stores its data, and another service dedicated to backups that mounts that same volume read-only to perform exports or synchronizations without touching the main container.
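That shared-volume setup could be sketched as follows (the backup container's image and command are hypothetical placeholders for a real backup tool):

```yaml
services:
  database:
    image: postgres:16
    volumes:
      - db-data:/var/lib/postgresql/data   # database writes here
  backup:
    image: alpine                          # hypothetical backup container
    volumes:
      - db-data:/data:ro                   # same volume, mounted read-only
    command: sh -c "tar czf /backups/db.tar.gz /data"  # placeholder export step

volumes:
  db-data:
```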

Docker allows you to fine-tune the configuration of volumes with more parameters (driver type, specific options for external drivers, etc.), but in most cases the most practical approach is to let Docker manage volumes automatically, without going crazy with exotic configurations.

The important thing is to be clear about which folders in your services need to be persistent, and to declare them as volumes in Compose so you don't lose data when you recreate containers or update images.

Configs: managing configuration files

The configs section is designed to manage configuration files for the services in your stack, similar to volumes but specifically focused on configuration.

Think of an Apache or Nginx server running in Docker. You'll probably need to adjust its configuration file from time to time, and rebuilding the image every time you modify it is inefficient and annoying, especially in environments where parameters are frequently tweaked.


With configs you can specify in the service that you want to apply a specific configuration, and then describe it in the configs section. There are several ways to define them, the most common being:

  • file: the configuration is generated from a local file.
  • external: if marked as true, Compose assumes that the configuration already exists and only references it.
  • name: internal name of the config in Docker, useful when combined with external: true.

This way you can update the configuration file on your machine and bring the stack back up without having to rebuild the base image, keeping the image's code separate from the environment-specific configuration.
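For the Nginx case described above, a sketch using the file variant could look like this (the config name and local path are made up for the example):

```yaml
services:
  web:
    image: nginx:alpine
    configs:
      - source: nginx-conf                    # config defined below
        target: /etc/nginx/conf.d/default.conf  # where it lands in the container
    ports:
      - "8080:80"

configs:
  nginx-conf:
    file: ./nginx/default.conf   # local file; edit it and recreate the stack
```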

Secrets: credentials and sensitive data

The secrets section solves a classic problem: where do I store passwords, API keys, and other sensitive information without leaving them scattered throughout the code or the YAML?

Just like with configs, secrets can be defined in different ways. The usual ones are:

  • file: the secret is generated from the contents of a file (for example, a text file with a key).
  • environment: the secret is created from the value of an environment variable on your system.
  • external: indicates that the secret has already been created and only needs to be referenced, useful to avoid overwriting secrets that are managed from outside.
  • name: internal name of the secret, especially relevant when combining external: true with secrets created by another tool.

With secrets you can make containers that need access to these credentials read them in a controlled manner without having to leave them visible in the code repository or in the compose.yaml itself, significantly reinforcing the security of your deployments.
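As a sketch of the file variant (the secret name and the password file path are illustrative; POSTGRES_PASSWORD_FILE is the official Postgres image's mechanism for reading a password from a file, with secrets mounted under /run/secrets/ by default):

```yaml
services:
  database:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD_FILE=/run/secrets/db-password  # read the password from the secret
    secrets:
      - db-password              # mounted at /run/secrets/db-password

secrets:
  db-password:
    file: ./db_password.txt      # local file kept out of the repository
```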

Working with multiple Compose files and includes

In large projects, it's not uncommon for your application to be divided into several services, sometimes managed by different teams. In these cases, it's practical to separate the configuration into multiple Compose files to better modularize the architecture.

A typical approach is to have a main compose.yaml file for the application and other files for parts of the infrastructure. For example, you can move the definition of Redis or other supporting services to a file infra.yaml and keep in the main Compose file only what directly concerns your app.

To do this you create the infra.yaml file with its own services section where you place, for example, the complete Redis service. Then, in your main compose.yaml, you add an include section that points to the infra.yaml file.

When you run docker compose up from the project directory, Compose combines both files and brings up all the services as if they were in a single YAML, while you keep the logic separated and better organized.
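A minimal sketch of the two files (note that the top-level include attribute requires a recent Compose version):

```yaml
# infra.yaml -- supporting services, maintained separately
services:
  redis:
    image: redis:alpine
```

```yaml
# compose.yaml -- the application's main file
include:
  - infra.yaml       # pull in the infrastructure services

services:
  web:
    build: .
    ports:
      - "8000:5000"
```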

This technique makes it easier for different teams to maintain their own Compose files and for the global application to be assembled using includes, which is very useful in architectures with dozens of containers or environments with a lot of shared infrastructure.

Essential Docker Compose Commands

Although Compose has a good catalog of commands, in day-to-day work most people use a handful of them on a recurring basis. It's important to master them because they define your workflow.

The most important is docker compose up. This command builds the necessary images (if they don't already exist), creates the containers, configures networks and volumes, and starts all the services defined in your Compose file. It's the command you use when you want to start your stack.

It is usually combined with the -d option to run in "detached" mode, that is, in the background. This way you don't fill up the terminal with logs and can continue using that session for other commands. For example: docker compose up -d.

To stop and clean up what you've brought up, you use docker compose down, which stops and removes containers, networks, and optionally associated images and volumes. Two very common flags here are --rmi (to delete images) and -v (to remove volumes defined in the volumes section).

If you want to see which containers are part of the project and what their status is, you can run docker compose ps. This lists each service, its status (up, exited, etc.), and the exposed ports, which is very useful for verifying that everything is working correctly after an up.

When you launch your stack in detached mode, the logs don't appear in the terminal. To view them, you use docker compose logs, either globally or filtered by service. The -f flag lets you follow logs in real time, which is very useful for debugging a specific service without needing to get inside the container.

Typical workflow: define compose.yaml, run docker compose up -d, check with docker compose ps, review logs with docker compose logs -f <service> if something goes wrong, and when you're finished, use docker compose down to leave everything clean.

If you ever get lost, docker compose --help shows you the list of available subcommands and options, so you can remember what each thing does without having to go to the documentation.

In light of all the above, Docker Compose is a key tool for anyone working with containers beyond individual projects. It allows you to develop directly in an environment very similar to (or identical to) production, control services, networks, and data from a simple YAML file, and avoid a host of compatibility and deployment issues that inevitably arise when working only "locally" without containers. Once you get used to writing a good Compose YAML file for your projects, it's hard to go back.