Docker everything
Introducing Docker’s features through a real-life example
We often need to set up or deploy an application (installing dependencies, dealing with mismatched versions, etc.), and then replicate that work on another server. This situation will no doubt annoy us unless we create a sort of agnostic recipe to deal with the problem.
Well, this is where Docker’s magic comes to the rescue!
To show some of Docker’s interesting features, we will use a simple example. Let’s suppose that we need to install two Apache web servers serving different source folders.
Installing the engine
Installing the Docker engine is quite easy: we can follow the steps for our specific platform in the official installation documentation.
Docker components
Creating the recipe (Dockerfile):
Docker allows us to create isolated environments (containers) using consolidated recipes (images). To achieve that, it lets us create a file called Dockerfile (the recipe itself), where we can define both the static steps and the steps to be run when the recipe is executed (the dynamic steps).
Static steps:
This is the reusable part of the recipe: each step is consolidated into a binary layer so the process isn’t repeated and consistency problems are avoided. This means the image is built as a stack of binary layers, so if we decide to change the last step, only that step will be discarded and processed again.
Static steps include:
Choosing a base environment or recipe.
Deciding which application or applications will be needed in the environment; basically, the purpose of the isolated environment.
Setting up the communication exposure of the environment, i.e. which ports we need to expose, which directories, variables, etc.
Adding helper, config or asset files that will be copied at build time and available at run time.
Dynamic steps:
These are the steps to be run when the recipe is executed, i.e. when a container is started; they run inside the created environment.
Our recipe sample
Next you can find our Apache sample Dockerfile.
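A minimal sketch of such a recipe could look like this (the Ubuntu base image is an assumption; the exposed ports and shared directories match the run commands used later in this post):

# Static step: base environment or recipe (an Ubuntu base is assumed)
FROM ubuntu:14.04

# Static step: the application this environment is for
RUN apt-get update && apt-get install -y apache2

# Static steps: communication exposure (ports and shared directories)
EXPOSE 80 443
VOLUME ["/var/www", "/etc/apache2/sites-enabled"]

# Static step: helper file copied at build time, available at run time
ADD bin/init.sh /init.sh
RUN chmod +x /init.sh

# Dynamic step: executed when a container is started from the image
CMD ["/init.sh"]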
And our init.sh, saved into an arbitrarily named bin folder.
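As another sketch, the script only needs to keep Apache running in the foreground so the container stays alive:

#!/bin/bash
# Load Apache's environment variables and start it in the foreground,
# so the container keeps running as long as Apache does
source /etc/apache2/envvars
exec apache2 -D FOREGROUND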
Building the image
Once we have created the Dockerfile and the helper files (init.sh), we can build the image (consolidating it into binary layers). This is done with the build command as follows:
$ sudo docker build -t <tag name>:<version name (optional)> <path to the directory containing the Dockerfile>
E.g.
$ sudo docker build -t apache_mold:1.0 .
Now the image will be built and available to be run whenever we like.
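To double-check that the image is indeed available, we can list the images stored locally:

$ sudo docker images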
Running the image
To run the image we just need to execute this command:
$ sudo docker run [options] <image name>
E.g.
$ sudo docker run --name firstApache -d -v /opt/sites/first.com/www:/var/www -v /opt/sites/first.com/siteconf:/etc/apache2/sites-enabled -p 81:80 apache_mold:1.0
Here -d runs the container in the background (detached mode), each -v maps a host directory into the container, and -p 81:80 binds the container’s port 80 to the host’s port 81.
The container’s unique hash identifier will be returned:
81ab67173f0d9c3e7379808ad94c69f6bf133623113862482ac358890f25035a
And that image can be instantiated as many times as desired:
$ sudo docker run --name secondApache -d -v /opt/sites/second.com/www:/var/www -v /opt/sites/second.com/siteconf:/etc/apache2/sites-enabled -p 82:80 apache_mold:1.0
Now that all the instances are running, let’s check them:
Listing containers
To list all the containers and their status, execute:
$ sudo docker ps -a
There we will find a lot of interesting info:
CONTAINER ID: a unique hash that identifies the container. If no name was provided, this id is really useful.
IMAGE: the recipe used to create the container.
COMMAND: the command the container was started with (the recipe’s dynamic step).
CREATED: time elapsed since the container was created.
STATUS: how long the container has been up or down.
PORTS: the port mapping, i.e. the ports opened by the container, such as 443 and 80, and the ones bound to the host, such as container port 80 bound to host ports 81 and 82.
NAMES: the unique name given to the container at the time of creation.
If specific details of a given container are needed, we can use:
$ sudo docker inspect <container id | container name>
There we will find a bunch of info related to the container, such as the networking configuration, port mappings, directory mappings, etc.
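E.g.
$ sudo docker inspect firstApache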
Accessing a container
If we need to access a specific container, we can do it quite easily, just by executing this command:
$ sudo docker exec -ti <container id | container name> bash
E.g.
$ sudo docker exec -ti firstApache bash
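Once inside, we can verify that the host folders were mounted where expected (the paths below come from the run command above):
$ ls /var/www
$ ls /etc/apache2/sites-enabled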
Stopping a container
If we are no longer using a container, we can stop it using:
$ sudo docker stop <container id | container name>
E.g.
$ sudo docker stop firstApache
If the container does not respond, we can use the kill command instead, with the same syntax.
Deleting a container
Once the container is stopped and we will not use it any more, we can remove it to release resources by executing:
$ sudo docker rm <container id | container name>
E.g.
$ sudo docker rm firstApache
Conclusion
Given that Docker provides really lightweight isolated environments, it sounds like the best option both for testing and for deploying production environments, since we can version recipes, avoid dependency problems, standardize the deployment process and much more. In future posts, we’ll show how to link containers, how to commit and collaborate, and many other fantastic features! Have fun with Docker!
Extra bonus:
Some useful commands to keep in mind:
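For instance, a short, non-exhaustive selection:

$ sudo docker logs <container id | container name>   # show a container's output
$ sudo docker start <container id | container name>   # start a stopped container
$ sudo docker restart <container id | container name>   # restart a container
$ sudo docker rmi <image name>   # remove an image that is no longer needed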