- Docker Compose allows you to define and orchestrate multi-container applications using a YAML file, simplifying complex deployments.
- Services, networks, and volumes are described declaratively, facilitating data persistence and secure internal communication.
- Commands like up, down, logs, exec, and build cover the complete lifecycle of projects managed with Compose.
- A single docker-compose.yml file makes deployment reproducible both locally and on cloud servers or VPS managed via console or Portainer.
Working with containers has become an everyday reality for almost any development team, and when an application moves beyond a single service and starts incorporating a database, a cache, a frontend, and other microservices, managing all of that manually becomes a real hassle. This is where Docker Compose comes in: a tool designed precisely so you don't have to start containers one by one, configure networks by hand, or remember every single lengthy command.
The goal of this article is for you to learn how to deploy complete applications with Docker Compose, both locally and on a server. We'll make sure you understand the role of each part of the docker-compose.yml file, how services, volumes, and networks work, the commands you need daily, and how all of this fits into a real-world context with a Node.js app and a database. If you're coming from a pure development background (frontend, mobile, backend) and concepts like orchestration and operations sound like gibberish, don't worry: we'll go step by step, but without skimping on details.
Basic environment requirements
Before we start deploying anything, you need a minimally prepared base system. A typical environment for practicing, and for small production projects, might look something like this:
- Ubuntu 20.04 as the operating system, whether on a server or a local machine.
- Docker 20.10.x installed and working (daemon active).
- Docker Compose 1.29.x or higher to manage multi-container projects.
- Node.js 18.x and npm 8.x if you are going to build images from a Node application.
It is not mandatory to use these exact versions, but do use something similar and relatively recent. On VPS or cloud servers (Google Cloud, AWS, Arsys, etc.), the usual practice is to set up a Linux VM, install Docker, and then add Docker Compose on top of that.
What is Docker and what does Docker Compose solve?

Docker is a platform for packaging and running applications in isolated containers. Each container includes only what's needed to run its process (binaries, libraries, runtime, etc.), sharing the kernel with the host but without interfering with other services. This eliminates the classic "it works on my machine" dilemma and allows you to move the app between different machines seamlessly.
Docker Compose, on the other hand, is the tool that orchestrates multiple containers as if they were a single application. If you only had a simple service, you could manage with docker run, but as soon as you need:
- A frontend (Angular, React, Vue…)
- A backend (Node.js, Java, Python…)
- A database (MySQL, PostgreSQL, MongoDB…)
- Perhaps a caching system like Redis
Managing all of this container by container becomes impractical. Compose allows you to define the entire stack in a single YAML file, declaring which services exist, how they connect, which ports they expose, which volumes they use, and which environment variables they need.
That file is usually called docker-compose.yml, although it can be named something else if you specify it with -f when running the commands. It works like a recipe: anyone with Docker and Docker Compose can replicate the same container infrastructure on their machine or server.
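For example, a quick sketch of running the usual commands against a file with a non-default name (mi-stack.yml is a hypothetical name, used here just for illustration):
docker-compose -f mi-stack.yml up -d
docker-compose -f mi-stack.yml down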
Installing Docker Compose on different systems

The only prerequisite for installing Docker Compose is having Docker running. From there, the process changes slightly depending on the operating system:
Installation on Linux (Ubuntu example)
On distributions like Ubuntu, you can install Docker Compose by downloading the official binary and giving it execute permissions. A typical pattern is:
sudo curl -L "https://github.com/docker/compose/releases/download/1.29.2/docker-compose-$(uname -s)-$(uname -m)" \
-o /usr/local/bin/docker-compose
sudo chmod +x /usr/local/bin/docker-compose
Once this is done, run docker-compose --version to verify that the command is available and responds with the correct version.
Installation on macOS
On macOS, the easiest way is usually to install Docker Desktop, which already integrates Docker Engine and Docker Compose. Alternatively, many developers use Homebrew:
brew install docker-compose
With Docker Desktop, you usually won't have to worry about Compose anymore, because the package itself takes care of keeping it up to date along with the Docker engine.
Installation on Windows
On Windows, the most practical solution is to use Docker Desktop for Windows. Download the installer from the official Docker website and follow the wizard; this gets both Docker and Compose ready to use from PowerShell or from WSL2.
Although it is possible to work with Docker on native Windows, for more serious development environments it is usually recommended to use WSL2 with a Linux distro, or directly a remote Linux VM, where Docker and Compose behave much closer to production.
Basic structure of a docker-compose.yml

Within the docker-compose.yml file we can define several main blocks: the format version, services, volumes, networks and, in some cases, additional configurations. Services are the essential part for deploying a typical app, while volumes and networks take care of persistence and internal communication.
Services: the heart of the app
Each service in the Compose file typically corresponds to a container (or group of containers) that provides a piece of the application: for example, a web service, a database service, a caching service, etc. The definition of each service supports a number of properties, among which the following stand out:
- image: Docker image that will be used for the container (for example, nginx:latest, postgres:15, node:18…).
- container_name: Explicit container name (optional; if not defined, Compose generates one).
- build: path to the directory where the Dockerfile is located to build the image if it does not exist or we want a custom one.
- command: command that will be executed when the container starts (overrides the CMD from the image).
- ports: port mapping in host:container format, for example "80:80" or "3030:3030".
- volumes: volume mounts (named volumes, bind mounts, etc.).
- environment: environment variables that are injected into the container.
- depends_on: indicates dependencies between services, so that some start before others.
- networks: Docker networks to which the service connects.
- restart: restart policy (no, always, on-failure, unless-stopped).
What's interesting is that many of these properties are optional. You could define a service using only `image`, and Compose would be able to start the container with default values. Then, as needed, you can fine-tune ports, networks, variables, etc.
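As an illustration, a minimal service that relies entirely on defaults could be as short as this (a hypothetical Redis cache, sketched just to show the idea):
version: '3.8'
services:
  cache:
    image: redis:7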
Volumes: Data persistence and sharing
Volumes in Docker are the standard mechanism for persisting data and sharing it between containers or with the host. In Compose, they are generally declared in the "volumes" section at the root level, and then referenced from the services.
A volume can have several relevant properties:
- driver: volume driver type (default, local).
- driver_opts: specific driver options, where you can specify:
  - type: volume type ("volume", "bind", "nfs", etc.).
  - device: path on the host you want to mount, in the case of a bind mount.
  - o: mounting options (rw, ro, etc.).
- external: if the volume is managed externally (not created by Compose).
- labels: arbitrary labels.
- name: volume name, if you want to customize it.
- scope: volume scope (usually local).
Although you can configure many details, in many cases it is enough to declare the volume name and use it in the service. For example, for a MySQL or PostgreSQL database, it's typical to have a data volume and sometimes a bind mount for initialization scripts.
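As a sketch, this is what a named volume backed by a host path through driver_opts might look like (the /srv/pgdata path is an assumption for illustration; it must exist on the host):
volumes:
  datos_db:
    driver: local
    driver_opts:
      type: none
      device: /srv/pgdata
      o: bind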
Networks: communication between containers
When an application has multiple modules, you will usually want to isolate its traffic on an internal network, so that the frontend and backend, or the backend and database, can only see each other and are not exposed to the host in an uncontrolled way.
Docker implements networks with a three-layer-inspired model:
- Endpoint: the virtual interface that connects the container to the network.
- Sandbox: the container's isolated network space (its own TCP/IP stack).
- Network: the network that interconnects the different sandboxes through the endpoints.
Among the most common network drivers in Docker are:
| Driver | Scope | Description |
|---|---|---|
| bridge | Local | The default network on most Docker hosts; it creates a virtual bridge on the host and connects containers to each other. |
| host | Local | Disables network isolation: the container shares the host's network stack directly. |
| overlay | Global | Connects containers running on different Docker hosts within a swarm. |
| macvlan | Global | Assigns a MAC address to the container so it appears as a physical device on the network. |
| none | Local | No Docker-managed network connectivity, for very specific cases. |
In Compose, networks are defined in the "networks" section, which is where you choose the driver and options. If you don't define anything, a bridge network is usually used by default. Services connect to these networks simply by listing the name in their definition.
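To make the isolation idea concrete, here is a hedged sketch with two bridge networks, where the service and network names are assumptions: the backend sits on both networks, while the frontend never sees the database directly:
services:
  frontend:
    image: nginx:latest
    networks:
      - front_net
  backend:
    image: mi_usuario/mi_api:latest # hypothetical image
    networks:
      - front_net
      - back_net
  db:
    image: postgres:15
    networks:
      - back_net # only reachable from the backend
networks:
  front_net:
    driver: bridge
  back_net:
    driver: bridge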
Simple example: nginx and PostgreSQL with Docker Compose

To put all these concepts into practice, imagine you want to build a simple website with Nginx and a PostgreSQL database. A minimalist example of docker-compose.yml might include two services, an internal network, and a volume:
version: '3.8'
services:
  web:
    image: nginx:latest
    container_name: mi_nginx
    ports:
      - "80:80"
    depends_on:
      - db
    networks:
      - app_net
  db:
    image: postgres:latest
    container_name: mi_postgres
    environment:
      POSTGRES_PASSWORD: ejemplo_password
    volumes:
      - datos_db:/var/lib/postgresql/data
    networks:
      - app_net
volumes:
  datos_db:
networks:
  app_net:
    driver: bridge
Here we see two very important things. On the one hand, Nginx exposes port 80 to the host while the database is only accessible within the app_net network; on the other hand, the PostgreSQL data is persisted in a volume called datos_db. Furthermore, web depends on db, so Compose will attempt to start the database first.
Dependencies between services with depends_on
In a real-world application, there are often relationships of the kind "this service makes no sense without that one": for example, a REST API that needs the database to be running to initialize connections, or a frontend that should start only when the backend responds.
In Docker Compose you can express these relationships with the depends_on key, listing the services on which the current one depends:
services:
  api:
    image: mi_usuario/mi_api:latest
    depends_on:
      - db
  db:
    image: postgres:15
With this configuration, when you run docker-compose up without specifying services, Compose will start the database first, and then the API. Keep in mind, however, that `depends_on` controls the startup order, but it doesn't guarantee 100% that the dependent service is "ready" (for example, that the database is accepting connections). For critical cases, wait scripts or healthchecks are typically used, as sketched below.
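As a sketch of that last point, recent Compose versions support the long form of depends_on combined with a healthcheck, so the API waits until PostgreSQL actually responds (check that your docker-compose version accepts the condition syntax; the interval values here are illustrative):
services:
  api:
    image: mi_usuario/mi_api:latest
    depends_on:
      db:
        condition: service_healthy
  db:
    image: postgres:15
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 3s
      retries: 10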
Essential Docker Compose Commands
Once you have the docker-compose.yml file in place, working with Compose on a daily basis revolves around a few essential commands. We will always assume that you are in the directory where the file is located, or that you are using the -f option to point to it.
Start the application: docker-compose up
The main command to deploy the containers is docker-compose up. If you run it without any further parameters, it will attempt to build the necessary images (if they have a build section) and start all the defined services:
docker-compose up
If you want the containers to run in the background, as is usual on servers, add -d:
docker-compose up -d
Stop containers: docker-compose down
To stop and delete project containers (though not necessarily images or volumes) you use:
docker-compose down
You can combine this command with additional options: if you want to delete volumes, custom networks, etc., you can do that too, as shown below, but in most cases a plain `down` is enough to stop the project in an orderly fashion.
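Some common variants, as a quick reference (be careful with -v, since it also deletes the named volumes and therefore the data):
# Stop and remove containers and the default network
docker-compose down
# Additionally remove the named volumes declared in the file
docker-compose down -v
# Additionally remove containers of services no longer in the YAML
docker-compose down --remove-orphans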
View the status of services: docker-compose ps
If you need to check which services are active, which ports they have mapped, and their status, the command to use is:
docker-compose ps
This will show you a table with the containers managed by that Compose project, including columns for name, image, ports, and current status, very useful for verifying that everything is as you expected.
Query logs: docker-compose logs
To see what's happening inside your services, you can use docker-compose logs. You can view the logs for all services or for a specific one:
docker-compose logs
# Only the logs of the "api" service
docker-compose logs api
If you add the -f option, you will follow the logs in real time (similar to tail -f):
docker-compose logs -f api
Entering a container: docker-compose exec
When you need to "get inside" a container to debug or run commands, you use `docker-compose exec`. For example, to open a shell in a service called `api`:
docker-compose exec api sh
In containers whose base image includes bash, you can use bash instead of sh, whichever is more comfortable depending on the image.
Build or rebuild images: docker-compose build and docker-compose pull
If you have modified a Dockerfile or part of the build context, you will need to rebuild the associated images:
docker-compose build
# Or for a specific service
docker-compose build api
When the images come from a remote registry (Docker Hub, a private registry…) and you simply want to download the latest version declared in the YAML, you use:
docker-compose pull
Remember that you can always fall back on "normal" Docker commands (docker ps, docker images, docker volume ls, docker network ls, etc.), but to maintain project consistency it is better to handle everything that affects the services defined in docker-compose.yml through Compose.
Complete example: Node.js + MySQL app with Docker and Docker Compose
Let's now look at a slightly more realistic example: a REST API in Node.js that uses MySQL to store information (for example, car data). The typical flow would be:
- Develop the API using environment variables for configuration.
- Create a Dockerfile for the API.
- Build and, if you want, upload the image to Docker Hub.
- Define a docker-compose.yml with the API and the database.
- Get everything up and running with docker-compose up and try the app.
1. Node.js API prepared for environment variables
Suppose you have a Node project with Express that exposes some endpoints to query, create, and list cars. The key here is not the code itself, but rather that the database connection configuration comes from environment variables such as DB_HOST, DB_USER, DB_PASSWORD, DB_NAME, etc.
This is essential for working well with Docker: you don't want to hard-code credentials or URLs into the code, but rather parameterize them at deployment time, whether locally or in the cloud.
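A common way to do this, sketched here with the variable names used later in this article: keep the values in a .env file next to the docker-compose.yml (and out of version control), and let Compose substitute them in the YAML:
# .env
DB_USER=coches_user
DB_PASSWORD=coches_pass

# docker-compose.yml (fragment)
environment:
  DB_USER: ${DB_USER}
  DB_PASSWORD: ${DB_PASSWORD}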
2. Dockerfile for the web application
Inside the app directory, you create a Dockerfile that is responsible for building the image. A basic example could be:
FROM node:18-alpine
WORKDIR /usr/src/app
COPY package*.json ./
RUN npm install --only=production
COPY . .
EXPOSE 3000
CMD ["node", "index.js"]
This Dockerfile is based on an official Node image, installs the dependencies, copies the code, exposes the app's port (3000 in this case), and defines the startup command. From here, you can build the image locally:
docker build -t mi_usuario/mi-api-coches:latest .
Verify that the image exists using docker images and, if all goes well, you could even run that API in a standalone container with docker run, as sketched below. But the interesting part will come when we orchestrate it with the database using Compose.
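For instance, a hedged sketch of such a standalone run (the container name and credentials are illustrative, and without a MySQL instance reachable at DB_HOST the API will fail to connect):
docker run -d \
  --name api_coches_prueba \
  -p 3000:3000 \
  -e DB_HOST=localhost \
  -e DB_USER=coches_user \
  -e DB_PASSWORD=coches_pass \
  -e DB_NAME=coches_db \
  mi_usuario/mi-api-coches:latest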
3. Upload the image to a registry (optional but very useful)
If you want to be able to deploy your app on another machine (a VM in the cloud, for example), it's very convenient to upload the image to Docker Hub or another registry. To do this:
docker login
# it will ask for your username and password
docker push mi_usuario/mi-api-coches:latest
If you want explicit versioning, you can tag multiple versions:
docker tag mi_usuario/mi-api-coches:latest mi_usuario/mi-api-coches:v1
docker push mi_usuario/mi-api-coches:v1
On the server where you are going to deploy, it will be enough to docker pull the image and use it in the docker-compose.yml, without needing to rebuild it there.
4. Define the docker-compose.yml with API and MySQL
The next step is to create a docker-compose.yml file that combines the API and the database, also adding a volume for the MySQL data and, if needed, an initialization script:
version: '3.8'
services:
  web:
    image: mi_usuario/mi-api-coches:latest
    container_name: api_coches
    ports:
      - "3000:3000"
    environment:
      DB_HOST: db
      DB_USER: coches_user
      DB_PASSWORD: coches_pass
      DB_NAME: coches_db
    depends_on:
      - db
    networks:
      - coches_net
  db:
    image: mysql:8
    container_name: mysql_coches
    environment:
      MYSQL_ROOT_PASSWORD: root_pass
      MYSQL_DATABASE: coches_db
      MYSQL_USER: coches_user
      MYSQL_PASSWORD: coches_pass
    volumes:
      - datos_mysql:/var/lib/mysql
      - ./initdb:/docker-entrypoint-initdb.d
    ports:
      - "3306:3306"
    networks:
      - coches_net
volumes:
  datos_mysql:
networks:
  coches_net:
    driver: bridge
There are several interesting details in this configuration:
- The API points to the database using DB_HOST=db, which matches the MySQL service name. Docker Compose provides an internal DNS, so you don't need to worry about IPs; just use the service name.
- The ./initdb folder is mounted at /docker-entrypoint-initdb.d in the MySQL container, which allows you to include .sql or .sh scripts that run automatically on first startup (for example, to create tables or insert sample data; see the sketch after this list).
- MySQL data is stored in the datos_mysql volume, so if you remove the containers with docker-compose down, the database information remains intact.
- Both services share the coches_net network, which acts as an isolated internal network. To the outside you only expose the ports you want (3000 for the API, and 3306 only if you need to reach the database from outside, which often isn't even necessary).
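As a sketch of what ./initdb might contain (the table and columns are assumptions made up for this example; the MySQL image only runs these scripts when the data directory is empty, that is, on the very first startup):
mkdir -p initdb
cat > initdb/01_esquema.sql <<'EOF'
CREATE TABLE IF NOT EXISTS coches (
  id INT AUTO_INCREMENT PRIMARY KEY,
  marca VARCHAR(50) NOT NULL,
  modelo VARCHAR(50) NOT NULL
);
INSERT INTO coches (marca, modelo) VALUES ('Seat', 'León');
EOF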
5. Deploy and test the application
With the docker-compose.yml ready, deploying it is as simple as launching from the file directory:
docker-compose up -d
The first time will take a little longer, because Compose needs to download the MySQL image (and the API image if it's not local), create the volume, and run the initialization scripts. Then, to check the status:
docker-compose ps
If everything is "Up" you can start making requests to the API from curl or Postman. For example:
# List all cars
curl http://localhost:3000/coches
# Get a specific car
curl http://localhost:3000/coches/1
# Create a car via POST
curl -X POST http://localhost:3000/coches \
  -H "Content-Type: application/json" \
  -d '{"marca": "Seat", "modelo": "León"}'
When you want to shut down the deployment, simply run `docker-compose down`. If you don't erase the volume, the data will be retained for the next startup.
Deploying Docker Compose on cloud servers and Portainer
Everything we've seen applies equally whether you deploy on your own laptop or on a cloud server. The difference is basically where you run the docker-compose up commands and how you open the ports to the outside.
A very simple approach for personal or side projects is to create a small VM (for example, a free e2-micro on Google Cloud), install Docker and Docker Compose, clone your repository with the code and the docker-compose.yml file, and launch the app there.
The only thing you need to keep in mind is the provider's firewall policy: if your app listens on port 3000, you need to open that port in the provider's firewall rules (or put a reverse proxy in front on ports 80/443 if you want to serve it securely over HTTPS). Once the port is open, you can access the app at http://SERVER_IP:3000 from any browser.
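As an illustration, on Google Cloud opening that port could look like the following (the rule name is a made-up assumption; other providers offer equivalent CLI commands or a web console for the same task):
gcloud compute firewall-rules create permitir-api-3000 \
  --allow=tcp:3000 \
  --source-ranges=0.0.0.0/0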
If managing containers via the console feels too cumbersome, you can use Portainer, a tool that also runs in a container and provides a web interface for managing Docker (and Docker Compose). To set up Portainer on a server, you would simply need to do something like:
docker volume create portainer_data
docker run -d \
-p 8000:8000 -p 9000:9000 \
--name=portainer \
--restart=always \
-v /var/run/docker.sock:/var/run/docker.sock \
-v portainer_data:/data \
portainer/portainer
Then you would access Portainer at http://SERVER_IP:9000, create an administrator user, and from there you could view containers, Compose stacks, networks, volumes, etc., all from the browser.
Working with Docker Compose allows you to encapsulate the entire architecture of an application in a single file, making it easy for any developer, from their laptop or a remote VM, to launch the same stack with a single command. By adding best practices like environment variables, internal networks, persistent volumes and, if needed, tools like Portainer, you have a solid foundation for deploying everything from small personal projects to fairly serious environments without getting lost.