Writing a Docker Compose file

A Docker Compose file defines everything about an application. The services, volumes, networks, and dependencies can all be defined in one place. The configuration can be used locally to stand up a development environment, plugged into an existing continuous integration or continuous deployment system, or even used on a server to start production services. Later chapters will show better ways to manage running services.

Docker Compose looks for a file named docker-compose.yml in the current directory. An alternate file can be specified with the -f option. The file is formatted in YAML so it can be edited in any text editor.

Extending compose files

We have already talked about using environment files to change your application’s behavior. Another way that you can do that is to use multiple Docker Compose files. Each file builds on the previous one, extending it and possibly overriding options. This can give you great flexibility in developing your application or adjusting it to fit different environments. This can be done in one of the following two ways:

  • The first way is to put your overrides in a file named docker-compose.override.yml. This file is read automatically by docker-compose. The options in docker-compose.yml are applied first and then the options in docker-compose.override.yml are applied.
  • The second option is to use the -f flag to docker-compose. You may include as many compose files as you want. They will be applied in order from left to right. For example, in the docker-compose -f docker-compose.yml -f docker-compose-prod.yml command, docker-compose.yml is read first then docker-compose-prod.yml.

Let’s go back to the sample web application. The default docker-compose.yml defined an entrypoint for the web service that uses the Shotgun loader for Plack. Shotgun is a great tool for debugging, but it is slow because it recompiles the application on every page load. For production, something with better performance is needed. Instead, let’s use Starman, a preforking loader.

Here is the relevant portion of the docker-compose.yml file:
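The original listing is not reproduced here. As a sketch, assuming the service is called web and the application lives in app.psgi on port 5000 (both assumptions), the relevant portion might look like this:

```yaml
# Sketch of the relevant portion of docker-compose.yml.
# The Shotgun loader is selected with plackup's -L option;
# the file name and port are illustrative assumptions.
services:
  web:
    build: .
    entrypoint: plackup -L Shotgun -p 5000 app.psgi
    ports:
      - "5000:5000"
```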

Now create a file named docker-compose-prod.yml that overrides entrypoint:
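Only the option being overridden needs to appear in the override file. A minimal sketch, assuming the same service name and port as before and an illustrative worker count:

```yaml
# docker-compose-prod.yml: overrides only the entrypoint.
# Starman is a preforking PSGI server; --workers 5 is an
# arbitrary choice for illustration.
services:
  web:
    entrypoint: starman --workers 5 --port 5000 app.psgi
```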

Start the application with docker-compose:
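The files are applied left to right, so the production override is layered on top of the base configuration:

```shell
# Apply docker-compose.yml first, then docker-compose-prod.yml.
docker-compose -f docker-compose.yml -f docker-compose-prod.yml up -d
```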


Using Docker networks

Docker supports the concept of virtual networks. Docker Compose uses them to provide network isolation between applications. By default, every application started with Docker Compose has its own virtual network named after the application’s namespace. That means, if you start two different applications with docker-compose, they will not be able to see each other.

Within an application’s namespace, the network created by docker-compose provides DNS service discovery. In the case of the example multi-container application, there were DNS entries created for the web and database services called web and db, respectively. In the web application’s configuration (in MyApp/), the database connection is configured to connect to a host named db. Docker does the rest by making sure that the db hostname points to the db service.
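As a sketch of how this looks in practice (the database engine, connection string format, and variable name here are assumptions for illustration), the web service simply refers to the db service by name:

```yaml
# Sketch: "db" in the connection string resolves through
# Compose's built-in DNS to the db service's container.
services:
  web:
    build: .
    environment:
      - DSN=dbi:Pg:dbname=myapp;host=db   # hostname "db" is the service name
  db:
    image: postgres
```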

Keeping your data safe in volumes

Adding data directly to the container image works very well for applications that you do not want to change while the applications are running. Your application code usually falls into this category. For tasks that will be updating data regularly, it is better to put the data on a volume. Volumes also provide a measure of data persistence since they stick around even after a container is removed.

Volumes are one of the keys to orchestration. Putting container data in an external volume helps to facilitate container updates and running containers across multiple hosts. Using networked, shared storage is covered in Chapter 3, Cluster Building Blocks – Registry, Overlay Networks, and Shared Storage.

Let’s go back to the web database example. The way it is configured at the beginning of the chapter, the database is created every time the application starts and destroyed when the application stops. That works for an example, but it is not very practical in real life. Instead, let’s reconfigure the application to put the database on a volume:
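The reconfigured listing is not reproduced here. A minimal sketch, assuming a PostgreSQL database (the image and its data directory are assumptions; use your database’s documented data path):

```yaml
# A named volume keeps the database files even when the
# container is removed and recreated.
services:
  db:
    image: postgres
    volumes:
      - db-data:/var/lib/postgresql/data

volumes:
  db-data:
```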


Example: NGINX
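The original example listing is not included here. A minimal sketch of running NGINX in a container, assuming host port 8080 and an arbitrary container name:

```shell
# Run NGINX in the background, publishing container port 80
# on localhost port 8080; the name is an arbitrary choice.
docker run -d --name nginx-test -p 8080:80 nginx

# Fetch the default page to generate some access-log output.
curl http://localhost:8080/
```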


Debugging containers

When working with containers, you will often have to figure out what is going on inside a running container, and docker ps does not provide all the information you need. For these cases, the first command to reach for is docker logs. This command displays any output that the container has emitted on both the stdout and stderr streams. For the following logs, I started the same NGINX container from before and accessed its hosted page on localhost:
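The log output itself is not reproduced here, but the commands look like this (the container name nginx-test is an assumption carried over from the earlier example):

```shell
# Show everything the container has written so far.
docker logs nginx-test

# Follow new output as it arrives, starting from the last 50 lines.
docker logs -f --tail 50 nginx-test
```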

Seeing what the container sees

When logs are not enough to figure things out, the command to use is docker exec, which executes a command in the running container and can even give you access to a full-blown shell:
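For example (the container name and the file inspected are assumptions):

```shell
# Run a one-off command inside the running container.
docker exec nginx-test cat /etc/nginx/nginx.conf

# Or open an interactive shell: -i keeps stdin open, -t allocates a TTY.
docker exec -it nginx-test /bin/sh
```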

Building a Docker image
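The Dockerfile listing itself is not reproduced here. A minimal sketch consistent with the discussion that follows (the base image and copied file are assumptions):

```dockerfile
# Minimal Dockerfile sketch; the LABEL directive is discussed below.
FROM nginx:latest
LABEL version="1.0"
COPY index.html /usr/share/nginx/html/index.html
```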


Our first new directive here is LABEL:

LABEL <key>=<value> (or the older LABEL <key> <value> form) adds metadata to the image being built, which can later be examined and filtered by docker ps and docker images using something like docker images --filter "label=<key>=<value>". Keys are generally lowercase, in reverse-DNS notation, but you can use anything you want here; since version should be present on every image, we use the top-level version key name.

The version label is not only there so that we can filter images; it also breaks Docker’s cache when we change it. Without cache-busting of this sort, or the manual flag at build time (docker build --no-cache), Docker keeps reusing cached layers all the way up to the most recently changed directive or file, so there is a high probability that your container will stay stuck with a frozen package configuration. This may or may not be what you want, but if you have automated build tooling, adding a version layer that breaks the cache whenever you change it makes the container very easy to update.
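To see label filtering in action (the image tag is an arbitrary choice, and the label value must match what the Dockerfile sets):

```shell
# Build the image, then list only images carrying that label.
# Note the label= prefix in the filter expression.
docker build -t labeled-example .
docker images --filter "label=version=1.0"
```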

Docker commands

Here are all of the commands we covered for Docker, plus a few others you might use if you build containers frequently:
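The original table is not reproduced here. As a hedged quick reference, the commands covered in this chapter (plus a few common additions) include:

```shell
docker build -t name:tag .           # build an image from a Dockerfile
docker images                        # list local images
docker run -d -p 8080:80 name:tag    # create and start a container
docker ps -a                         # list containers, including stopped ones
docker logs <container>              # show a container's output
docker exec -it <container> /bin/sh  # run a command inside a container
docker stop <container>              # stop a running container
docker rm <container>                # remove a container
docker rmi <image>                   # remove an image
docker pull name:tag                 # fetch an image from a registry
docker push name:tag                 # publish an image to a registry
docker tag source:tag target:tag     # give an image another name
```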

Dockerfile commands

The following list is a similar one, but this time, we are covering the directives you can use in a Dockerfile, arranged in roughly the order you would use them when writing one:
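The original list is not reproduced here. As a hedged summary, an annotated Dockerfile touching the common directives (all values are illustrative):

```dockerfile
FROM ubuntu:22.04          # base image; must come first
LABEL version="1.0"        # image metadata
ARG APP_ENV=prod           # build-time variable
ENV PORT=8080              # environment variable baked into the image
WORKDIR /app               # working directory for later directives
COPY . /app                # copy files from the build context
RUN apt-get update         # execute a command at build time
EXPOSE 8080                # document the port the service listens on
VOLUME /data               # declare a mount point
USER nobody                # user for subsequent commands
ENTRYPOINT ["./server"]    # fixed part of the container command
CMD ["--help"]             # default arguments, overridable at run time
```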

Sample server application infrastructure:

Building the image manually:
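The command itself is not reproduced here; as a sketch (the tag name is an arbitrary choice):

```shell
# Build the image from the Dockerfile in the current directory.
docker build -t web_server:latest .
```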

Running the container:
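A sketch of running the image just built (the published port is an assumption):

```shell
# Run the image in the background, publishing the service port.
docker run -d -p 8000:8000 web_server:latest
```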

Setting up a Docker Swarm cluster

Since all the functionality to set up a Docker Swarm cluster is already included in the Docker installation, this is actually a really easy thing to do. Let’s see what commands we have available to us:
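The help output is not reproduced here, but the main subcommands are these:

```shell
docker swarm init                # turn this engine into a swarm manager
docker swarm join-token worker   # print the command a worker uses to join
docker swarm leave --force       # leave the swarm (--force on a manager)
```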

Deploying services

Most of the commands we will initially need are accessible through the docker service command:
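The help output is not reproduced here; as a quick reference (the service name and image are assumptions):

```shell
docker service create --name web nginx   # start a replicated service
docker service ls                        # list services
docker service ps web                    # list the service's tasks
docker service scale web=3               # change the replica count
docker service logs web                  # view the service's aggregated logs
docker service rm web                    # remove the service
```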

Deploying it all

As we did for our simple web server, we will begin by creating another Swarm cluster:
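A single command turns the local engine into a one-node swarm:

```shell
# Initialize a new swarm with this machine as the manager.
docker swarm init
```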

Then, we need to create our overlay network for the service-discovery hostname resolution to work. You don’t need to know much about this other than it creates an isolated network that we will add all the services to:
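The command is not reproduced here; a sketch, where the network name is an arbitrary choice:

```shell
# Create an attachable overlay network for the services to share.
docker network create --driver overlay --attachable service_network
```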

Finally, we will build and launch our containers:
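As a sketch (the image name, service name, and port are assumptions, and the --network value must match whatever overlay network you created):

```shell
# Build the image, then launch it as a swarm service on the
# overlay network, publishing its port.
docker build -t web_server .
docker service create --network service_network --name web \
    --publish 8000:8000 web_server
```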

If you are having trouble with getting these services up and running, you can check the logs with docker service logs <service_name> in order to figure out what went wrong. You can also use docker logs <container_id> if a specific container is having trouble.

The Docker stack

As the previous few paragraphs made obvious, setting these services up manually is somewhat of a pain, so here we introduce a new tool that makes this much easier: Docker Stack. It uses a YAML file to deploy all the services easily and repeatably.

First, we will clean up our old exercise before trying the Docker stack configuration:
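A sketch of the cleanup (the service and network names are assumptions matching whatever you created earlier):

```shell
# Remove the service and the overlay network from the earlier exercise.
docker service rm web
docker network rm service_network
```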

Now we can write our YAML configuration file; you will notice the parallels between the CLI options we used earlier and this configuration:
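The original stack file is not reproduced here. A hedged sketch, where the service name, image, replica count, and network name are all assumptions:

```yaml
# docker-stack.yml: the deploy and networks keys mirror the
# docker service create options used earlier.
version: "3"

services:
  web:
    image: web_server
    ports:
      - "8000:8000"
    deploy:
      replicas: 3
    networks:
      - service_network

networks:
  service_network:
    driver: overlay
```

It can then be deployed in one step with docker stack deploy -c docker-stack.yml webapp, where webapp becomes the stack’s namespace.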


