

Understanding the -itd options of docker

These are the most common options we come across when playing with docker commands: -i, -t and -d. Are you confident about what they really mean in the docker world? Do you know how they really work? Let's learn what they do with the simplest of examples.

Meaning of each option

  • -i : interactive mode (keeps STDIN open so you can pass input to the container)
  • -t : TTY mode (allocates a pseudo-terminal)
  • -d : daemon mode (runs in the background, also called detached mode)

Let us assume that we have a running docker container called simple_test_container, a Linux machine with Ubuntu as the base image.

Now let's see what happens when we try to run an exec command on it.

Scenario 1: Use case of -i


docker exec -i simple_test_container /bin/bash

The exec command is used to run commands inside a running container. But with only -i you will not be able to "get into" the container: no terminal is allocated, so there is no shell prompt. You are in interactive mode, which means the only thing you can do here is pass the command that you want to run on the container.
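Since -i keeps STDIN open, you can also pipe input straight into the container; a small sketch, assuming the container above is running:

```shell
# the piped text becomes the STDIN of bash inside the container
echo "ls /" | docker exec -i simple_test_container /bin/bash
```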

Scenario 2: Use case of -i

Now, run this instead to interact with the container:

docker exec -i simple_test_container ls

Now you get the list of items shown as the result. You have successfully interacted with the running container.

Scenario 3: Use case of -t

Suppose we want to get into the container; for that, we use the option -t:

docker exec -t simple_test_container /bin/bash

The problem you will face here: though we successfully get into the running docker container, we will not be able to run any commands, because we don't have the -i option enabled that allows us to interact with the container. A terminal is allocated, but STDIN is not kept open.

Scenario 4: Use case of -it

To solve the problem in scenario 3, we use both options together, -it, which allows us to get into the running docker container and then interact too.

docker exec -it simple_test_container /bin/bash

The catch you will face here: when the shell is the container's main process (i.e. the container was started with docker run -it), coming out (exit) of the shell stops the container.

Scenario 5: Use case of -itd

To solve the issue in scenario 4, we make use of the -d option, which runs the command in daemon mode (in the background). Now even though we come out of the container, it will not be halted; it will keep running in the background.
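Following the same pattern as the earlier scenarios, scenario 5 looks like this:

```shell
# -d detaches right away, so the shell keeps running in the background
docker exec -itd simple_test_container /bin/bash

# verify: the container is still up after we are back on the host
docker ps
```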

Summing up the use cases of the docker -i, -t and -d options

  • -i : just interact with the docker container (keeps STDIN open)
  • -d : run the container in daemon mode (background)
  • -t : get a terminal in the running container, also called TTY mode
  • -it : get into a TTY and interact
  • -id : interaction only, with the container running in the background
  • -itd : allocates a TTY, allows interaction and runs in the background


Landing into docker multistage builds

When you browse through the official docker docs or a few of the videos on YouTube, you will find a single answer from all of them: multistage builds reduce complexity and reduce the docker image size. Yes, that is true. Today, I will be talking about what made me use a docker multistage build in my Dockerfile.

How many of you have gone through this? The script or code that was working last time suddenly crashes the next day. I had a Dockerfile built with alpine as the base image. It was working like a charm. After running the same Dockerfile later, it was throwing errors while installing composer. I know, installing composer is just one line to add, but every effort I put in was in vain. The only option left for me was to add composer separately, by reusing an image where composer was already working.

I tried it and it was working like a charm again. So first I deleted the line that installed composer in my Dockerfile, which was the main culprit. Then I added the composer image in the first line of the Dockerfile as shown below.

FROM composer:1.5.1 AS my_composer

This line simply pulls the docker image that ships with composer. I added this because it has composer already installed on an alpine OS.

Now it is time to add the real alpine image that I was using and that had the trouble.

FROM alpine:3.8

So now we have two FROM statements here, which is why we call it a multistage build.

Now it is time to copy composer from the first image into the second, which can easily be done as below.

COPY --from=my_composer /usr/bin/composer  /usr/bin/composer

This will copy /usr/bin/composer from the first (composer) image into the second (alpine) image. We use --from to specify which stage to copy from. The first argument is the source and the second is the destination.

So I built the image again. It worked flawlessly. But some of you might hit an issue like php not found. This is because the php path is not set in the second image, so we need to set it. I just added a symlink as the solution, as follows.

RUN ln -s /usr/bin/php7 /usr/bin/php
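Putting the pieces together, the whole Dockerfile might look like this minimal sketch; the php package names are assumptions for illustration and depend on what your image actually needs:

```dockerfile
# first stage: an image that already has composer installed
FROM composer:1.5.1 AS my_composer

# second stage: the real image we ship
FROM alpine:3.8

# php itself still has to come from the alpine repositories
# (package names here are an assumption, adjust to your needs)
RUN apk add --no-cache php7 php7-phar php7-json php7-mbstring

# copy only the composer binary from the first stage
COPY --from=my_composer /usr/bin/composer /usr/bin/composer

# composer expects a `php` binary on the PATH
RUN ln -s /usr/bin/php7 /usr/bin/php
```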

Though this article does not describe how to use multistage docker builds in general, I hope that sharing my real experience helps you realize when you really need one. In the future I will try to put up some easy material on how to write a multistage Dockerfile too.

Running commands on a running docker container using exec

Many times, you may land in situations where you need to execute some commands on a docker container which is already running. We normally do so because we don't want to stop the container, make changes and start over. Instead we can simply pass the commands we want to the running container.

Using the docker exec command:

Docker has a special command, exec, for this task. The syntax for using it is the following:

docker exec -i <container-name> bash <<< "<command>"

If you are reading this, you must be familiar with the docker exec command. Normally we use it to get into a docker container. But here we use just the -i option, as we don't want to get into the container, but simply interact with it. The command to execute is passed after <<<.

Example: docker exec -i wordpress_container bash <<< "ls -al"

This command will get into the docker container called wordpress_container and show us a detailed listing of the directory it lands in on entering.

Passing a command file to the exec command:

Instead of passing the commands after <<<, we can also pass a file containing the commands to execute. The syntax changes a bit for that: instead of <<< we use a single less-than symbol (<).
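Both forms are just plain shell redirection; docker exec -i simply forwards its STDIN to the container. You can see the two behaviors with bash alone, no container needed:

```shell
# here-string: the quoted text becomes the STDIN of the inner bash
bash <<< "echo here-string"      # prints: here-string

# file redirection: the file contents become the STDIN of the inner bash
echo "echo from-file" > /tmp/cmds.sh
bash < /tmp/cmds.sh              # prints: from-file
```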

Example of exec with file:

Here, in this example, we will try to run an update command on a running mysql container. We will not pass the update command directly, but instead create a file where the SQL command is written and pass the file to the command.

SITE_URL="https://staging.com"
echo "UPDATE wp_options SET option_value = '$SITE_URL' WHERE option_name = 'siteurl';" > /tmp/updater.sql

docker exec -i $container_name mysql -uuser -ppassword dbname < /tmp/updater.sql

Why do so?

Many times you may land in errors when directly passing the commands over exec, so in such scenarios you may try adding the commands to a file and passing that after the < symbol, as in the above example.


Connecting a docker-compose service to an external docker network

The purpose of this post is to learn how to connect a docker service to an existing, already created docker network. We will create a docker network and write a docker-compose.yml file with a service connecting to that external network.

Let's create a docker network with the overlay driver. For simplicity you may opt not to use the driver and subnet options that I am using here in the network creation.

docker network create --driver overlay --subnet 10.0.9.0/24 myexternal-network
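Note that the overlay driver requires docker to be running in swarm mode; if you are just trying this locally, a plain bridge network (the default driver) behaves the same for this example:

```shell
# default bridge driver, no swarm mode needed
docker network create myexternal-network
```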

Now let's create a docker-compose.yml file and add the external network to it.

docker-compose.yml

version: "3.4"

networks:
  myexternal-network:
    external: true

services:
  myapp:
    image: imagename
    networks:
      - myexternal-network

So under networks we usually define a network, but here we added a network and marked it as external. Attaching the network to a service is done the same way as for any other network.

Note:
As we are using an external network, it must already exist before running the docker service, else docker-compose will throw an error message.

Creating a WordPress development environment using docker-compose in 1 minute

Yes, it takes no more than 1 minute to set up a WordPress development environment in docker containers. Here we will be using a docker-compose.yml file which can easily be found in the official docker documentation. And we will run a simple docker-compose command to start the services and run WordPress.

Prerequisites:

Basic knowledge and understanding of docker and docker-compose.

Here is the link from where I got the contents for the docker-compose file:

https://docs.docker.com/compose/wordpress/. I have added a few descriptions to this docker-compose file for someone who might be unfamiliar with it.

docker compose file for wordpress

We are using two images here:

  1. WordPress image: Behind the scenes it maintains PHP and the required web server, so using this image you don't even have to bother about them.
  2. MySQL image: The mysql image is used for the database.

What is done with docker-compose can also be done by creating your own Dockerfile, or by running both services and then linking them, but that is more time consuming and becomes frustrating as the number of containers to run grows. With docker-compose everything becomes easy. The part I like best is linking multiple containers together with ease.

How are the two containers linked here:

The WordPress service is dependent on the database service db. You can see a line with depends_on and the service name allocated, which is responsible for linking the two services we have together.
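For reference, the compose file along the lines of the official example looks roughly like this; the passwords here are placeholders, not values to keep for anything beyond local development:

```yaml
version: "3.3"

services:
  db:
    image: mysql:5.7
    volumes:
      - db_data:/var/lib/mysql    # persist the database between restarts
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somewordpress   # placeholder
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpress            # placeholder

  wordpress:
    depends_on:
      - db                        # this is the line linking the two services
    image: wordpress:latest
    ports:
      - "8000:80"                 # WordPress reachable at http://localhost:8000
    restart: always
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpress     # must match MYSQL_PASSWORD above

volumes:
  db_data: {}
```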

Command used to start containers:

As we have the docker-compose file, we will use the docker-compose command to start the services.

docker-compose up -d

This command will run the two services, and -d runs them in the background.


Creating your own private docker registry

The best thing about docker is its centralized docker registry. We can easily push and pull the images we require from it. But what if we want our own registry for keeping all the docker images we build? Keeping the images in a self-made docker registry makes them easier to manage, faster in some cases, and totally under your control.

Steps to create a docker private registry:

There are other ways to create a docker private registry, but we will follow what is clearly described in the official docker documentation. The thing to understand is that we will run a docker container which we can use as a private registry. The cool thing here is that there is an image available just for creating a private registry. We run that image to create a registry container and keep it private with some authentication set. The name of the image is registry:2.

Start the docker container as you normally do:

$ docker run -d -p 5000:5000 --restart=always --name privateregistry registry:2

So the container will be running in daemon mode, its name is privateregistry, and it is listening locally on port 5000. Check with the docker ps command to verify whether registry:2 is running or not.

How to push images to the private registry we just made:

So our private registry docker container is up and running; now we will push one image to it. For the sake of example we will simply pull the alpine image, change its tag and push it to our private registry.

Let's pull the alpine image:

docker pull alpine

We pulled the alpine image from the official docker repo. Now we will make it ready to push to our local registry.


docker tag alpine localhost:5000/private-alpine

So here we added a new tag to the alpine image we recently pulled. The format of the tag is hostname:port/imagename. So when pushing the image, docker will understand that it has to be pushed to that particular host and port.

Now we are totally ready to push our docker image to the local registry.

docker push localhost:5000/private-alpine

Pulling from the private registry

docker pull localhost:5000/private-alpine

The format for pulling is just the same as pulling from the official docker registry; the only difference you will find is the image tag, which has to include your private registry's host and port.
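If you want to check what the registry is holding, the registry's HTTP API exposes a catalog endpoint that lists all repositories; a sketch, assuming the privateregistry container from above is still running on port 5000:

```shell
# lists every repository stored in the registry
curl http://localhost:5000/v2/_catalog
```

After the push above, the response should include private-alpine in the repositories list.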


Watch the short video showing how it works:

The video is in progress and will be posted shortly.

Final Note on docker private registry:

We did this to demonstrate how to create your own docker private registry, but for security purposes you have to implement some authentication so that only authorized users can push to and fetch from the registry you build.

A quick guide to docker container creation

Let's create a docker container with the Ubuntu operating system. You can choose the OS of your choice. We choose docker over VirtualBox because docker containers are very lightweight compared to virtual machines created with Vagrant. Follow these easy steps and you are done.

1. Pull the image:

We need the image of the operating system we will use in our container. It's just like installing a Windows or Ubuntu operating system on your machine: consider your laptop as the container and the operating system as the image. To get the image, go to Docker Hub and search for the image you want, or simply use the command below. If you are confused about what command to use, it is also provided on the Docker Hub image description page.

docker pull ubuntu

2. Create a container

It is going to take some time for the image file to download. After it is downloaded, create a container with the docker command below. The command will create a docker container with the Ubuntu OS.

docker run -it ubuntu

And the docker container with Ubuntu is ready to run.

3. Get container list and start the one you want

To get the list of containers, use the command below:

docker ps -a

This will show the docker containers with details like ID, Names and others. For now we just require the ID or the Name to start a container.

Let's start the container:

docker start <container_Name or Container_ID>

We have started the docker container. To check the running containers, just type:

docker ps

In step 3 we used -a to get the full list of containers; omitting it shows only the running ones.


Get into the created container:

Now everything is ready. What everyone would expect, or worry about, is how to get into that container. Use the following command, which will take you into the respective container and open a bash shell.

docker exec -it <container_id or Name> /bin/bash

Some Reminder:

Run the commands as the root user or use sudo, or else you may run into permission errors.

Conclusion:

Creating a docker container is very easy and takes less time than with Vagrant. If you have any problems, doubts or errors, please comment below. In the future I will also try to write about why to use docker over Vagrant.