Complete Guide to Integrating Docker with Kubernetes: Concepts, Examples, and Best Practices

Last update: 08/05/2025
Author: Isaac
  • Learn the essential concepts and differences between Docker and Kubernetes.
  • Learn how to migrate Docker Compose projects to Kubernetes using Kompose.
  • Explore practical examples, advanced strategies, and tools for integration.

Docker and Kubernetes Integration

Containers are the cornerstone of application modernization. They allow you to encapsulate an application along with all its dependencies into a portable unit that runs uniformly in any environment, whether on-premises, cloud, or hybrid. Containers isolate applications from the host operating system and from other containers, using features inherent to the Linux kernel such as namespaces and cgroups. For a deeper dive into how to define repeatable configurations, you can also check out our article on What is YAML and how to use it in Kubernetes.

Among their advantages are isolation, portability, light weight, and modularity. These characteristics, along with the ability to launch instances quickly and maintain consistency across environments, make them ideal for the full development-to-production lifecycle. Popular container technologies include Docker, Podman, Kubernetes (as an orchestrator), and CRI-O.

What is Docker? Fundamentals and Features

Docker is the most widely used containerization platform, specializing in packaging applications and all their dependencies into what we call containers. Thanks to Docker, developers can ensure their application works the same regardless of the environment, eliminating the disparity between development and production.

A Docker image is a template that includes the operating system, source code, libraries, and necessary configuration. Images are created through a Dockerfile, or automatically in some modern frameworks, such as Spring Boot, using specific plugins. When executed, an image becomes a container: a lightweight, isolated instance with everything needed to run the application.

Docker's advantages: portability, deployment automation, version control, fast boot times, modularity, and an active community that keeps improving the ecosystem.

Kubernetes: Orchestrating Containers at Scale

Kubernetes, abbreviated as K8s, is the leading container orchestrator. It automates the deployment, management, scaling, and operation of containerized applications. It was created by Google and is now an open source project maintained by the CNCF and a global community.

Unlike Docker, which handles containerization alone, Kubernetes manages multiple containers spread across multiple nodes, providing high availability, scalability, automation, self-healing, and load balancing.

Its key components include:

  • Pods: The minimum unit of deployment, which can contain one or more containers sharing network and storage (see the sketch just after this list).
  • Services: They provide stable, abstract access to pods, managing traffic through network policies and load balancing.
  • Deployments: They manage the lifecycle of pods, ensuring the desired number of replicas is maintained and facilitating upgrades or rollbacks.
  • ConfigMaps and Secrets: They store configuration and sensitive data, allowing for dynamic injection into pods.
  • Namespaces: They allow the cluster to be divided into logical environments to organize resources and isolate projects or teams.
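To illustrate that first point, here is a minimal sketch of a Pod whose two containers share an emptyDir volume; the names and images are purely illustrative.

apiVersion: v1
kind: Pod
metadata:
  name: web-with-logger
spec:
  volumes:
    - name: shared-logs
      emptyDir: {}                      # temporary volume shared by both containers
  containers:
    - name: web
      image: nginx:alpine
      volumeMounts:
        - name: shared-logs
          mountPath: /var/log/nginx     # nginx writes its logs here
    - name: log-reader
      image: busybox
      command: ["sh", "-c", "tail -F /logs/access.log"]   # reads the same files via the shared volume
      volumeMounts:
        - name: shared-logs
          mountPath: /logs

Both containers also share the Pod's network namespace, so they could reach each other on localhost.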

Docker Kubernetes Orchestration

Comparison: Docker Compose vs Kubernetes

Docker Compose and Kubernetes solve different, yet related, needs. Docker Compose simplifies the definition and management of multi-container applications in local or development environments, using a docker-compose.yml file. It allows you to launch multiple services, networks, and volumes with a single command, facilitating testing and collaboration.

Kubernetes goes much further: it is designed for large-scale production deployments, orchestrating distributed containers across multiple nodes, managing high availability, auto-scaling, rescheduling failed pods, and much more. Many teams start with Compose and migrate to Kubernetes as complexity or the need to scale increases.
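To make the comparison concrete, here is a minimal docker-compose.yml sketch of the kind of file this section has in mind; the service names, images, and ports are hypothetical.

services:
  web:
    image: nginx:alpine
    ports:
      - "8080:80"            # publish the web front end on the host
    depends_on:
      - api
  api:
    build: ./api             # assumes an ./api directory containing a Dockerfile
    environment:
      - DB_HOST=db
  db:
    image: postgres:16
    environment:
      - POSTGRES_PASSWORD=example   # placeholder credential for local development only
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:                   # named volume so database data survives container restarts

A single docker compose up starts all three services on one machine, which is exactly the scope where Compose shines.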

Transitioning from Docker Compose to Kubernetes with Kompose

For teams that already have applications defined in Docker Compose, migrating to Kubernetes can seem complex. This is where Kompose takes center stage: it is a command-line tool designed to automatically convert docker-compose.yml files into Kubernetes manifests (YAML), such as Deployments and Services, speeding up the transition and reducing manual errors.

The process with Kompose is simple and consists of four basic steps:

  1. Installation: Download the binary from the official repository and place it in your PATH.
  2. Preparing docker-compose.yml: Review and update your file to reflect your desired settings (services, networks, volumes, etc.).
  3. Conversion: Run kompose convert to generate the Kubernetes manifests (a sketch of typical output follows this list).
  4. Deployment: Use kubectl apply -f to bring resources to the Kubernetes cluster.
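To give a rough idea of the result, the sketch below approximates the Deployment and Service that kompose convert could generate for the hypothetical web service from the Compose example above; real output also carries kompose-specific annotations and may vary between versions.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  labels:
    io.kompose.service: web
spec:
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: web
  template:
    metadata:
      labels:
        io.kompose.service: web
    spec:
      containers:
        - name: web
          image: nginx:alpine
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  selector:
    io.kompose.service: web
  ports:
    - port: 8080            # port published in docker-compose.yml
      targetPort: 80        # port the container actually listens on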

This tool lets you reuse the work already invested in Compose and carry it over to more advanced production environments.

Hands-on Integration: Building and Deploying Docker Images on Kubernetes

The typical deployment workflow involves these steps:

  • Image building: Build your image from your code and dependencies using a Dockerfile.
  • Tagging and pushing: Tag the image and upload it to a container registry such as Docker Hub, Google Container Registry, or Azure Container Registry, so that Kubernetes can access it.
  • Creating the Kubernetes cluster: It can be done locally (with Minikube or Docker Desktop), in the cloud (GKE, EKS, AKS), or on-premise.
  • Deployment: Define your YAML manifests (Deployment, Service, ConfigMap, Secret, etc.) and deploy the application using kubectl apply -f.
  • Service exposure: Create a Service (ClusterIP, NodePort, or LoadBalancer) to make pods accessible, either from within the cluster or from outside (see the sketch after this list).
  • Monitoring and testing: Validate the deployment by accessing the exposed endpoint and using tools such as kubectl get pods to check the status of the deployment.
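For the service exposure step, a minimal Service sketch could look like the following; the app: myapp label and the port numbers are assumptions that must match your Deployment's pod template, and the file would be applied with kubectl apply -f service.yaml.

apiVersion: v1
kind: Service
metadata:
  name: myapp
spec:
  type: LoadBalancer        # external exposure on cloud providers; use ClusterIP internally or NodePort on local clusters
  selector:
    app: myapp              # must match the labels on the Deployment's pod template
  ports:
    - port: 80              # port exposed by the Service
      targetPort: 8080      # port the application listens on inside the pod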

Advanced Example: Spring Boot Application Integration

One of the most requested integrations is for Java applications built with Spring Boot. Tools such as Cloud Native Buildpacks, used by the Spring Boot Maven and Gradle plugins, allow you to create Docker images automatically without having to write a Dockerfile. Simply running ./mvnw spring-boot:build-image -Dspring-boot.build-image.imageName=miapp:latest produces an image ready to be taken to production, complying with best practices and ensuring compatibility.

A natural next step is integrating with ConfigMaps and Secrets to manage configuration in the cluster, while keeping Docker Compose for development environments.
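A minimal sketch of that idea, assuming the miapp:latest image from the previous command and hypothetical resource names (miapp-config, miapp-secret), could look like this:

apiVersion: v1
kind: ConfigMap
metadata:
  name: miapp-config
data:
  SPRING_PROFILES_ACTIVE: "prod"            # non-sensitive configuration
---
apiVersion: v1
kind: Secret
metadata:
  name: miapp-secret
type: Opaque
stringData:
  SPRING_DATASOURCE_PASSWORD: "change-me"   # placeholder; store real credentials securely
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: miapp
spec:
  replicas: 2
  selector:
    matchLabels:
      app: miapp
  template:
    metadata:
      labels:
        app: miapp
    spec:
      containers:
        - name: miapp
          image: miapp:latest               # image produced by spring-boot:build-image
          ports:
            - containerPort: 8080
          envFrom:                          # inject all keys as environment variables
            - configMapRef:
                name: miapp-config
            - secretRef:
                name: miapp-secret

Spring Boot's relaxed binding maps environment variables such as SPRING_DATASOURCE_PASSWORD onto the corresponding properties, so no code changes are needed.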

Kubernetes Manifests: Deployment, Service, ConfigMap, Secret, Namespace

Kubernetes manages resources using manifest files in YAML (or JSON) format. Each resource serves a specific function, and it's important to understand its structure and purpose:

  • Deployment: Manages the pod lifecycle and maintains the desired number of replicas, while facilitating zero-downtime upgrades.
  • Service: Allows access, discovery, and traffic balancing to pods.
  • ConfigMap: Stores non-sensitive configuration that can be injected as environment variables or mounted as files.
  • Secret: Stores sensitive data such as passwords or certificates, encoded in base64.
  • Namespace: Segments and organizes resources within the cluster for better multi-team or multi-project management.

For advanced integration, access rules are defined using Role and RoleBinding, ensuring that only authorized users or services can modify or read certain resources within their namespaces.
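As an illustration, a namespaced Role that only allows reading configuration, bound to a hypothetical ServiceAccount (miapp-sa in a demo namespace), might look like this:

apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: config-reader
  namespace: demo
rules:
  - apiGroups: [""]                         # core API group
    resources: ["configmaps", "secrets"]
    verbs: ["get", "list"]                  # read-only access
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: config-reader-binding
  namespace: demo
subjects:
  - kind: ServiceAccount
    name: miapp-sa                          # assumed ServiceAccount used by the application pods
    namespace: demo
roleRef:
  kind: Role
  name: config-reader
  apiGroup: rbac.authorization.k8s.io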

Health Checks and Monitoring with Spring Boot Actuator

In real-life environments, monitoring is essential. Spring Boot Actuator, for example, exposes HTTP endpoints (/actuator/health/readiness and /actuator/health/liveness) that Kubernetes can use to determine whether a pod is ready to receive traffic (readiness probe) or whether there is a serious failure that requires restarting the container (liveness probe).

Configuring these probes in deployment manifests is key to achieving resilient, self-managing applications that can recover from errors without manual intervention.
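Here is a sketch of that configuration, as a fragment that would sit under spec.template.spec.containers in the Deployment manifest; the port and the Actuator paths assume Spring Boot defaults with the probe endpoints enabled.

- name: miapp
  image: miapp:latest
  ports:
    - containerPort: 8080
  readinessProbe:                        # gates traffic until the app reports ready
    httpGet:
      path: /actuator/health/readiness
      port: 8080
    initialDelaySeconds: 10
    periodSeconds: 5
  livenessProbe:                         # restarts the container if it stops responding
    httpGet:
      path: /actuator/health/liveness
      port: 8080
    initialDelaySeconds: 30
    periodSeconds: 10
    failureThreshold: 3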

Docker Integration Strategies for Kubernetes

For advanced scenarios, it is common to encounter situations where you need to execute Docker inside a pod, for example for CI/CD pipelines like Jenkins. There are three main strategies:

  • Docker in Docker (DinD): Install Docker Engine inside the container itself. It's functional, but it poses security risks, storage compatibility issues, and is not recommended for production.
  • Docker out of Docker (DooD): The application accesses the host node's Docker daemon by mounting its socket. This improves performance and prevents daemon duplication, but still poses security and resource management risks beyond Kubernetes' control.
  • Docker Sidecar: Deploy an additional container within the pod that acts as Docker Engine and exposes its socket over the local network, adding authorization plugins to mitigate risks. While this reduces some issues, it still requires special privileges and care.

Recommendation: Whenever possible, use daemonless tools (such as Kaniko or Buildah) to build images, avoiding the problems and risks of the previous approaches.
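As one possible daemonless approach, the sketch below runs a Kaniko build as a one-off Pod; the Git repository, destination image, and registry-credentials Secret are hypothetical, so adapt the flags and credentials to your own setup.

apiVersion: v1
kind: Pod
metadata:
  name: kaniko-build
spec:
  restartPolicy: Never
  containers:
    - name: kaniko
      image: gcr.io/kaniko-project/executor:latest
      args:
        - --dockerfile=Dockerfile
        - --context=git://github.com/example/app.git              # hypothetical source repository
        - --destination=registry.example.com/example/app:latest   # hypothetical registry and image
      volumeMounts:
        - name: docker-config
          mountPath: /kaniko/.docker                               # Kaniko looks for config.json here
  volumes:
    - name: docker-config
      secret:
        secretName: registry-credentials                           # assumed Secret containing a config.json with registry auth

Because the build runs as an ordinary pod, no Docker daemon or privileged socket is needed on the node.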

Installing Docker and Kubernetes: Windows and Linux

On Windows, the easiest way is to use Docker Desktop, which lets you enable Kubernetes as a local development environment. The process is:

  • Download and install Docker Desktop from the official website.
  • Enable the Kubernetes option in the configuration.
  • Use kubectl (bundled with Docker Desktop) to manage resources.

For production environments, or when more control is required, you can install Minikube, or deploy Kubernetes directly on virtual machines or bare metal.

On Linux (e.g. Ubuntu 22.04):

  • Update the system with sudo apt-get update && sudo apt-get upgrade.
  • Install Docker with sudo apt-get install docker-ce docker-ce-cli containerd.io docker-compose-plugin (after adding Docker's official repository).
  • Install Kubernetes by adding the official repository and running sudo apt-get install -y kubeadm kubelet kubectl.
  • Initialize the cluster with sudo kubeadm init --pod-network-cidr 10.244.0.0/16, then install a pod network add-on (for example Flannel, which uses that CIDR by default).
  • Set up kubectl by copying the admin configuration file to $HOME/.kube/config and adjusting its permissions.
Related article: YAML: What it is, how it works, and what it's used for