

Understanding the ITD (-itd) of docker

These are among the most common things we come across when playing with docker commands: the options -i, -t and -d. Are you confident about what they really mean in the docker world? Do you know how they actually work? Let's learn what they do with a simple example.

Meaning of each option

  • -i : interactive mode (keeps STDIN open so you can send input to the container)
  • -t : allocates a pseudo-TTY (a terminal) in the container
  • -d : daemon (detached) mode, i.e. the container runs in the background

Let us assume we have a running docker container called simple_test_container, a Linux machine with ubuntu as the base image.

Now let's see what this command does when we run exec against it.

Scenario 1. Use case of -i

docker exec -i simple_test_container /bin/bash

The exec command is used to run a command inside a running container. With -i alone you will not get a usable shell inside the container: you are only in interactive mode, meaning STDIN stays open. The only thing you can do here is pass in the command that you want to run in the container.

Scenario 2. Use case of -i

Now, run this instead to interact with the container

docker exec -i simple_test_container ls

Now you get the list of files as the result. You have successfully interacted with the running container.

Scenario 3. Use case of -t

Suppose we want to get into the container; then we use the option -t

docker exec -t simple_test_container /bin/bash

The problem you will face here: although we do get a terminal inside the running docker container, we will not be able to run any commands, because without the -i option our input is not forwarded to the container.

Scenario 4. Use case of -it

To solve the problem in scenario 3 we use both options together, -it, which allows us to get into the running docker container and then interact with it too.

docker exec -it simple_test_container /bin/bash

The catch you may face here: when the shell is the container's main process (that is, when the container was started with docker run -it rather than entered with exec), coming out of the shell with exit stops the container.

Scenario 5. Use case of -itd

To solve the issue in scenario 4 we make use of the -d option, which runs the container in daemon (detached) mode, i.e. in the background. A container started with docker run -itd keeps running even after we exit a shell session inside it, because the container itself is detached from our terminal.

Summing up use case of docker -i -t -d options

  • -i : just interact with (send input to) the docker container
  • -d : run the container in daemon mode (background)
  • -t : get a terminal in the running container, also called TTY mode
  • -it : get into a TTY and interact with it
  • -id : allows interaction only, with the container running in the background (no TTY)
  • -itd : allocates a TTY, allows interaction, and runs the container in the background
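To make the combinations concrete, here is a short session sketch; it assumes a local docker daemon and the ubuntu image, with simple_test_container as in the example above:

```shell
# start a container in the background; -itd keeps it alive with a TTY and open STDIN
docker run -itd --name simple_test_container ubuntu

# -i only: no terminal, but STDIN is wired through, so commands can be piped in
echo "ls /" | docker exec -i simple_test_container /bin/bash

# -it: a usable interactive shell; typing exit leaves the shell,
# and the container keeps running because it was started detached
docker exec -it simple_test_container /bin/bash
```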



Way to debug in ansible using register and debug

# tasks file for testing 

- name: check if the container is running
  shell: docker ps | grep "my_mysql"
  register: result
  failed_when: "result.rc == 2"
  check_mode: no

- debug:
    var: result.stdout

- name: check value
  shell: echo "Container is not Running"
  when: result.stdout == ""

The ansible playbook above is what I normally write when it comes to debugging or running a play based on a condition match.

Let's write a playbook to check whether the container is running, and run the job only when it is.

  • The shell task simply runs a bash pipeline to check whether a container with that name is running or not.
  • register stores the result of the command in the variable result. This is how you capture command output in ansible.
  • failed_when and check_mode: no are there to keep the task from failing. If no such container is found, you might expect an empty value to be registered, but instead ansible treats the task as failed and terminates. grep returns 0 on a match, 1 on no match, and 2 on a real error, so by failing only when the return code is 2 (which will not happen here short of a genuine error) we keep the playbook from failing.
  • Use debug to inspect what exactly ended up in the registered variable. By default it would show everything; here we just want the output, so we specify its stdout key.
  • Use when as the conditional statement in ansible. It is like an if condition in any programming language: we run the echo task only when the registered output is empty.
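The failed_when trick above hinges on grep's exit codes, which are easy to check locally (this is plain grep behavior, independent of docker or ansible):

```shell
# grep's exit codes drive the failed_when logic above:
#   0 = match found, 1 = no match, 2 = a real error (e.g. missing file)
echo "my_mysql  mysql:5.7" | grep my_mysql && echo "rc=0 (match)"
echo "something_else" | grep my_mysql || echo "rc=$? (no match)"
grep my_mysql /no/such/file 2>/dev/null || echo "rc=$? (error)"
```

Since rc=2 only occurs on a real error, `failed_when: "result.rc == 2"` lets the "no match" case through with an empty stdout.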




2 initial ansible setups to make it run smoothly

After installing ansible, these were the two basic setup issues that got my playbooks stuck. Here I have tried to sum up what the issues look like and how to resolve them.

Ansible gets stuck at gathering facts:

When we try to log in to an instance for the first time we get a message like

Are you sure you want to continue connecting (yes/no)?

One way to get rid of it is to log in to the instance manually and go through this prompt once. But we are here for automation, so instead we simply make one setting in the ansible config file.

The ansible config file is available at /etc/ansible/ansible.cfg. Here we just need to uncomment this setting, which is commented out by default.

# uncomment this to disable SSH key host checking
host_key_checking = False

Ansible playbook cannot ssh

There might be multiple reasons for it, but let's see the usual suspects.

  1. SSH public key not added:
    Make sure you have added the ansible server's public key to authorized_keys on the instance. With this we authorize the ansible server to reach the instance. Adding the public key can itself be automated, but I will not cover that here; if you are interested you will find roles built for the purpose.
  2. If we are still not able to ssh after clearing the above issue, the problem can be that we have not put the instance IP in the right location. If you created an inventory.ini and are trying to run the playbook against the instances set in inventory.ini but cannot, remember that the default ansible inventory location is /etc/ansible/hosts.

    Either change the inventory location in ansible.cfg or add the IP address to the default hosts file.

    #inventory      = /etc/ansible/hosts

    Change this to point at your newly created inventory.ini.
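For reference, the two relevant lines in /etc/ansible/ansible.cfg end up looking like this (the inventory path is my example; point it at wherever your inventory.ini lives):

```ini
[defaults]
# uncomment this to disable SSH key host checking
host_key_checking = False
# point ansible at your own inventory instead of the default /etc/ansible/hosts
inventory = ./inventory.ini
```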

These are a few of the issues I used to face during my initial setups.

landing into docker multistage build

When you browse the official docker docs or a few of the videos on youtube, you will find a single answer from all of them: multistage builds reduce complexity and reduce the docker image size. Yes, that is true. Today I will be talking about what made me use a docker multistage build in my Dockerfile.

How many of you have gone through this? The script or code that was working last time suddenly crashes the next day. I had a Dockerfile built with alpine as the base image. It was working like a charm. Running the same Dockerfile now, it was throwing errors while installing composer. I know, installing composer is just one line to add, but every effort I put in was in vain. The only option left for me was to add composer separately, or to reuse an image where composer was already installed and working.

I tried it and it worked like a charm again. So first I deleted the line in my Dockerfile that installed composer, which was the main culprit, then added the composer image in the first line of the Dockerfile as shown below.

FROM composer:1.5.1 AS my_composer

This line simply pulls in the docker image that ships with composer. I added this one because it has composer installed on alpine.

Now it is time to add the real alpine image I was using, the one that had the trouble.

FROM alpine:3.8

So now we have two FROM lines here, which is why we call it a multistage build.

Now it is time to copy composer from the first image into the second, which can easily be done as below.

COPY --from=my_composer /usr/bin/composer  /usr/bin/composer

This copies /usr/bin/composer from the first (composer) image into the second (alpine) image. We use --from to specify which stage to copy from. The first argument is the source and the second is the destination.

So I built the image again. It worked flawlessly. But most of you might hit an issue like php not found. This is because the php binary is not on the expected path in the second image, so we need to set it up. I just added a symlink as the solution, as follows.

RUN ln -s /usr/bin/php7  /usr/bin/php
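Putting the pieces together, the whole Dockerfile described above can be sketched like this (the apk line installing php7 is my assumption; the rest of your build steps would follow):

```dockerfile
# stage 1: only used as a source for the composer binary
FROM composer:1.5.1 AS my_composer

# stage 2: the real image
FROM alpine:3.8
RUN apk add --no-cache php7
# grab composer from the first stage instead of installing it here
COPY --from=my_composer /usr/bin/composer /usr/bin/composer
# composer expects a `php` binary on the PATH
RUN ln -s /usr/bin/php7 /usr/bin/php
```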

Though this article does not describe how to use multistage builds in detail, I hope sharing my real experience helps you realize when you really need one. In the future I will try to put out some easy material on how to write a multistage Dockerfile too.

Creating a wordpress development environment using docker-compose in 1 minute

Yes, it takes no more than 1 minute to set up a WordPress development environment in docker containers. Here we will be using a docker-compose.yml file, which can easily be found in the official docker documentation. Then we will run a simple docker-compose command to start the services and run WordPress.

Prerequisites :

Basic knowledge and understanding of docker and docker-compose.

Here is the link from where I got the contents for the docker-compose file. I have added a few descriptions to it for anyone who might be unfamiliar with docker-compose.

docker compose file for wordpress

We are using two images here

  1. WordPress image: behind the scenes it maintains php and the required web server, so with this image you don't even have to bother about them.
  2. MySQL image: the mysql image is used for the database.

What is done with docker-compose can also be done by creating your own Dockerfile, or by running both services and then linking them, but that is more time consuming and becomes frustrating as the number of containers grows. With docker-compose everything becomes easy. The part I like best is linking multiple containers together with ease.

How are the 2 containers linked here:

As the WordPress service depends on the database service db, you can see a line with depends_on and the service name allocated, which is responsible for linking the two services together.
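For anyone who does not want to hunt for the file, here is a minimal sketch of the docker-compose.yml along the lines of the official docs example (the image tags, passwords, and host port are my placeholder choices):

```yaml
version: '3.3'
services:
  db:
    image: mysql:5.7
    environment:
      MYSQL_ROOT_PASSWORD: example_root_pw
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: example_pw
  wordpress:
    # WordPress image: php + web server already wired up
    image: wordpress:latest
    depends_on:
      - db          # this is the line that links the two services
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: example_pw
```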

Command used to start containers:

Since we have a docker-compose file, we will use the docker-compose command to start the services.

docker-compose up -d

This command starts both services; -d runs them in the background.

Here is a short video of how I ran the services


Creating your own private docker registry

The best thing about docker is its centralized registry: we can easily push and pull the images we require. But what if we want our own registry for keeping all the docker images we build? Keeping images in a self-hosted docker registry makes them easier to manage, faster to pull in some cases, and totally under your control.

Steps to create a docker private registry:

There are other ways to create a docker private registry, but we will follow what is clearly described in the official docker documentation. The thing to understand is that we will run a docker container which we can use as a private registry. The cool thing is that there is an image available just for this: we run that image to create a registry container, and keep it private with some authentication set up. The name of the image is registry:2.

Start the docker container as you normally do:

$ docker run -d -p 5000:5000 --restart=always --name privateregistry registry:2

So the container is running in daemon mode, its name is privateregistry, and it is listening locally on port 5000. Check with the docker ps command to verify that registry:2 is running.

How to push images to the private registry we just made:

So our private registry container is up and running; now we will push one image to it. For the sake of example we will simply pull the alpine image, change its tag, and push it to our private registry.

Let's pull the alpine image

docker pull alpine

We pulled the alpine image from the official docker repo. Now we will make it ready to push to our local registry.

docker tag alpine localhost:5000/private-alpine

So here we added a new tag to the alpine image we just pulled. The format of the tag is hostname:port/imagename, so when pushing the image docker understands which host and port it has to be pushed to.

Now we are totally ready to push our docker image to the local registry.

docker push localhost:5000/private-alpine

Pulling from the private registry

docker pull localhost:5000/private-alpine

The format for pulling is just the same as pulling from the docker registry; the only difference is the image tag, which has to match the one pushed to your private registry.
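To confirm what the registry is holding, you can also query its HTTP API (the Registry API v2); this assumes the registry container is still listening on localhost:5000:

```shell
# list the repositories stored in the private registry
curl http://localhost:5000/v2/_catalog
```

For our example this should return a JSON document listing private-alpine among the repositories.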

creating docker private registry guide

Watch the short video showing how it works:

The video is in progress and will be posted shortly.

Final Note on docker private registry:

We did this to demonstrate how to create your own docker private registry, but for security purposes you should implement some authentication so that only authorized users can push to and pull from the registry you built.

my first hello world python script build running on jenkins

It took 5 minutes at most to set up jenkins on my local machine. Here is the installation guide I followed to set it up on an ubuntu 16.04 machine

  1. wget -q -O - | sudo apt-key add -

    Added the repository key, which should return OK

  2. echo deb binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list

    Adding the package repository address to the server's sources.list

  3. sudo apt-get update
  4. sudo apt-get install jenkins

    So jenkins is ready and up and running at this url on port 8080. Typing it into my browser opens the jenkins setup page asking for the initial password. This page appears when you open jenkins for the first time. The location where you can find the password is also given on the page: use sudo cat followed by the path given, which will return the password you need to enter there. Then a user registration process starts.

Note: please fill in the form properly and create a user account; don't skip it. Last time I skipped it to use the default username and password, but that didn't work and I had to re-install jenkins. That's it, we are ready to start with jenkins.

This is what I did after jenkins installation

  1. Created a github repository
  2. Created a simple hello world Python script
  3. Created my first job to start building my simple hello_world script.
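The script itself can be as small as this (hello_world.py is my assumed filename; the article does not show the actual file):

```python
# hello_world.py: the entire script the Jenkins job builds
message = "Hello, World!"
print(message)
```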

Jenkins jobs setup details here:

  1. Added a git repository that I created
  2. Added the build section with the following bash script
  3. Apply and save it
  4. Start the build by selecting the project and clicking on build now.
  5. Output

Successfully running hello world python in jenkins


Worked as expected, happy me, and I tweeted on twitter the big output you see above.

setup puppet server and puppet agent on ubuntu 16.04 docker container

We are going to learn here how to set up a puppet server (puppetmaster) and a puppet agent (puppet) on ubuntu 16.04.

Environment setup for puppet

Let's not create a virtual box, which is quite a bit heavier than docker; here I will be using docker containers with the Ubuntu 16.04 image and all necessary packages like curl and vim installed. If you don't know about docker, please read up on it first, or optionally you can set up the environment using virtualbox. The problem with virtual boxes is that they take a lot of memory and hardware: you need a well-specced system with enough memory to allocate resources to each newly created virtual box.

Here the environment for our puppet master and agent will be something like bellow

  1. a docker container with the ubuntu 16.04 image, used as the puppet master
  2. a docker container with the ubuntu 16.04 image, used as the puppet agent

You can also use some other operating system of your choice, like centos, to create your lab environment. If you are using docker containers, you may need to add curl and whatever else does not come built in with the image.

How to setup puppet master in ubuntu 16.04

Assuming we are inside the first docker container, where we will set up the puppet master, type the following commands. Here we are just adding the puppetlabs package repository; after that we will install the puppet master (puppetserver).

curl -O
sudo dpkg -i puppetlabs-release-pc1-xenial.deb
sudo apt-get update

Note: the same will be used for the puppet agent's initial package setup.

Now we are ready to install puppet master.

apt-get install puppetserver

This will install the puppetserver in our first docker container.

Verify the puppetserver is installed properly

To verify that puppetserver is installed properly, type

puppetserver --version

which should return a puppetserver version.

If you get a problem like puppetserver command not found even though puppetserver was installed successfully, the reason can be that puppetserver's location is not set in the environment PATH, and hence you are not able to run the command. To set it up in the environment path, follow this guide: how to fix puppetserver not found error.

We have successfully installed puppetserver; now a few settings need to be made, which is how we declare that this machine is the puppetserver (puppetmaster).

Open the puppet configuration file /etc/puppetlabs/puppet/puppet.conf
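The article does not reproduce the file, so here is a minimal sketch of the setting I mean in /etc/puppetlabs/puppet/puppet.conf (the names are my example values; use the hostnames your agents will use to reach the master):

```ini
[main]
# names the agents may use to contact this server
dns_alt_names = puppetmaster,puppet
```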


Setting up the puppet agent in ubuntu 16.04

Follow the above process of installing the puppetlabs package: we add the same package on the puppet agent machine too, and then create the agent. After finishing that, it is time to add the puppet agent.

installing puppet agent in ubuntu 16.04

$ sudo apt-get install puppet

verify to see if it is installed or not

Type puppet, which should result in some description being shown.

configure puppet agent conf file

The puppet agent config file is located at /etc/puppet/puppet.conf


Here we let the puppet agent know the name we assigned to the puppet master machine.
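A minimal sketch of what that looks like in /etc/puppet/puppet.conf (puppetmaster is the name we give the master in /etc/hosts below):

```ini
[main]
# the hostname of the puppet master this agent should talk to
server = puppetmaster
```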

set the host name of puppet master in /etc/hosts

Put the IP address of the puppet master followed by its name. Suppose the IP address of the puppet master machine is <puppet-master-ip>; then /etc/hosts will have the line: <puppet-master-ip>    puppetmaster

Summarizing The puppet master and puppet agent installation process

  1. get the package from puppetlabs
  2. install the puppetserver
  3. configure the puppetserver conf file /etc/puppetlabs/puppet/puppet.conf, adding the dns_alt_names setting
  4. repeat step 1 on the puppet agent machine
  5. install puppet, which is the puppet agent for us
  6. configure the puppet agent conf file located at /etc/puppet/puppet.conf, adding the server setting
  7. add the puppet master machine's IP address and name to /etc/hosts

These are the overall steps required for a puppet master and puppet agent setup on an ubuntu 16.04 machine.

Note: some paths may vary depending on the version of puppet you are using. This has been tested on a recent version of puppet.

Fixing the puppetserver command not found ERROR

I recently installed puppetserver successfully, following the official guides. My setup was a docker container with ubuntu 16.04.3 LTS. Puppetserver installed successfully, but when trying to run the puppetserver command I came across this error: "puppetserver command not found".

The problem was that in the latest puppetserver version the puppetserver path is not added to $PATH; adding it solved the problem.

Here is how to add the program path to the $PATH
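A minimal sketch, assuming the default install location of recent puppet packages (/opt/puppetlabs/bin; check where your package put the binary if this differs):

```shell
# recent puppet packages install their binaries under /opt/puppetlabs/bin;
# append it to PATH (put this line in ~/.bashrc to make it permanent)
export PATH=$PATH:/opt/puppetlabs/bin

# the new directory should now appear at the end of PATH
echo "$PATH"
```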


This adds the new path to $PATH. To verify it, run:

echo $PATH

which should show the recently added path.

Extra tips on puppetserver certificate generation:

As we know, the command puppet master --no-daemonize --verbose will generate a certificate on the puppet server (master), but remember to stop the puppet master before running it, or you may hit an error message like the one below.

Error: Could not run: Could not create PID file: /var/run/puppetlabs/puppetserver/

Stop the puppetserver and run the certificate generation on the master node again.


A quick guide to docker container creation

Let's create a docker container with the Ubuntu operating system. You can choose the OS of your choice. We choose docker over a virtual box because docker containers are very lightweight compared to a virtual box created with vagrant. Follow these easy steps and you are done.

1. Pull the image:

We need the image of the operating system we will use in our container. It is just like installing a windows or Ubuntu operating system on your machine: consider your laptop the container and the operating system the image. To get the image, go to dockerhub and search for the image you want, or simply use the command below. If you are confused about which command to use, it is also provided on the image's description page on docker hub.

docker pull ubuntu

2. Create a container

It is going to take some time for the image to download. After it is downloaded, create a container with the docker command below. The command creates a docker container with the ubuntu OS.

docker run -it ubuntu

And the docker container with Ubuntu is ready to run.

3. Get container list and start the one you want

To get the list of containers, use the command below

docker ps -a

This shows the docker containers with some details like ID, Names and others; for now we just need the ID or the Name to start the container.

Lets start the container

docker start <container_Name or Container_ID>

We have started the docker container. To check the running containers, just type

docker ps

In step 3 we used -a to get all the containers; without it we get only the running ones.


Get into the created container:

Now everything is ready. What everyone expects, or worries about, is how to get into that container. Use the following command, which takes you into the respective container and opens a bash shell.

docker exec -it <container_id or Name> /bin/bash

Some Reminder:

Run the commands as the root user or use sudo, or you may run into error messages.
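As an alternative to prefixing every command with sudo, you can add your user to the docker group (this assumes the docker group was created by the installation; you need to log out and back in for it to take effect):

```shell
# let the current user talk to the docker daemon without sudo
sudo usermod -aG docker $USER
```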


Creating a docker container is very easy and takes less time than vagrant. If there is any problem, doubt, or error, please comment below. In the future I will also try to write about why to use docker over vagrant.