
2 initial Ansible setups to make it run smoothly

After installing Ansible, these are the two basic setup issues that used to get my playbooks stuck. Here, I have tried to sum up what the issues are and how we can resolve them.

Ansible gets stuck at gathering facts:

When we try to log in to an instance for the first time, we get a message like

Are you sure you want to continue connecting (yes/no)?

One way to get rid of it is to log in to the instance and go through this prompt manually. But we are here for automation, so instead we simply need to change one setting in the Ansible config file.

The Ansible config file is available at /etc/ansible/ansible.cfg. Here we just need to uncomment this setting, which is commented out by default.

# uncomment this to disable SSH key host checking
host_key_checking = False
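If you do not want to touch the global config, the same behaviour can also be switched off per shell session through an environment variable. A minimal sketch, assuming a bash shell (the playbook name is just a placeholder):

# disable SSH host key checking for this session only, instead of editing ansible.cfg
export ANSIBLE_HOST_KEY_CHECKING=False
ansible-playbook playbook.yml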

Ansible playbook cannot SSH

There might be multiple reasons for it, but let's see what they can be.

  1. SSH public key not added:
    Make sure you have added the Ansible server's public key to authorized_keys on the instance. With this we authorize the Ansible server to access the instance. We can even automate the process of adding the public key, but I will not cover it here; if you are interested, you will find roles built for that purpose.
  2. If we are still not able to SSH even after clearing the above issue, the problem can be that we have not set the instance IP in the right location. If you have created an inventory.ini and are trying to run a playbook against the instance set in inventory.ini but cannot, we need to change the default Ansible inventory location, which is set to /etc/ansible/hosts.

    Either change the inventory location in ansible.cfg or add the IP address to the default hosts file.

    #inventory      = /etc/ansible/hosts

    Uncomment this line and point it to your newly created inventory.ini, as sketched below.
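For example, a minimal sketch of what that could look like; the group name, IP address and playbook name below are only placeholders:

# create a small inventory file with one host
cat > inventory.ini <<'EOF'
[webservers]
192.168.0.3
EOF

# either set "inventory = ./inventory.ini" in ansible.cfg,
# or pass the file explicitly when running the playbook:
ansible-playbook -i inventory.ini playbook.yml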

These are a few of the issues that I used to face during my initial setups.

landing into docker multistage build

When you browse through the official Docker docs or a few of the videos on YouTube, you will find a single answer from all of them: multistage builds reduce complexity and reduce the Docker image size. Yes, that is true. Today, I will be talking about what made me utilize a Docker multistage build in my Dockerfile.

How many of you have gone through this? The script or code that was working last time suddenly crashes the next day. I had a Dockerfile built with Alpine as the base image, and it was working like a charm. Running the same Dockerfile now, it was throwing errors while installing Composer. I know installing Composer is just one line to add, but every effort I put in was in vain. The only option left for me was to add Composer separately, or rather reuse an image that already had a working Composer.

I tried it and it was working like a charm again. So first I deleted the line that installed Composer in my Dockerfile, which was the main culprit, and then added the Composer image in the first line of the Dockerfile as shown below.

FROM composer:1.5.1 AS my_composer

This line simply pulls in the Docker image that ships with Composer. I chose it because it already has Composer installed on Alpine.

Now it's time to add the real Alpine image that I was using and that had the trouble.

FROM alpine:3.8

So now we have two FROM instructions here, which is why we call it a multistage build.

Now it's time to copy Composer from the first image into the second, which can easily be done as below.

COPY --from=my_composer /usr/bin/composer  /usr/bin/composer

This will copy /usr/bin/composer from the first Composer image into the second Alpine image. We use --from to specify which stage to copy from; the first argument is the source and the second is the destination.

So I built the image again and it worked flawlessly. But most of you might hit an issue like php not found. This is because the php binary is not on the expected path in the second image, so we need to set it. I just added a symlink as the solution, as follows.

RUN ln -s /usr/bin/php7  /usr/bin/php
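Putting the pieces above together, a minimal sketch of how the multistage Dockerfile ended up looking; the exact PHP setup in the second stage is an assumption and depends on what your application needs:

# write the two-stage Dockerfile described above
cat > Dockerfile <<'EOF'
# stage 1: used only as a source for the composer binary
FROM composer:1.5.1 AS my_composer

# stage 2: the real application image
FROM alpine:3.8
# php7 is assumed to be installed in this stage already (e.g. via apk add php7 ...)
COPY --from=my_composer /usr/bin/composer /usr/bin/composer
RUN ln -s /usr/bin/php7 /usr/bin/php
EOF

docker build -t myapp .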

Though this article does not describe in depth how to use multistage builds, I hope putting down my real experience helps you realize when we really need them. In the future I will try to put out some easy material on how to write a multistage Dockerfile as well.

setting up global variables in jenkins

Jenkins has so many cool plugins to ease our life. Today we will be looking into one of my personal favorites, which allows us to set global environment variables that can be used across all Jenkins jobs. The name of the plugin is Global Variable String Parameter.

How to use the Global Variable String Parameter plugin in Jenkins:

  1. Install the plugin
  2. Once it is successfully installed, visit the Manage Jenkins section.
  3. Open Configure System.
  4. Scrolling down on that page, you will find the section where you can add the variable name and its value.
    (image: global variable setup in Jenkins)
  5. Now save the changes, and you can use the defined variable just like other global variables in Jenkins. For the example in the image above you can use it as $dbname, as in the sketch below.
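As a rough illustration, the variable can then be referenced in a job's "Execute shell" build step like any other environment variable; the commands below are only placeholders:

#!/bin/bash
# dbname comes from the Global Variable String Parameter configuration shown above
echo "Running this job against database: $dbname"
# hypothetical usage: pass it to whatever tool the job drives
mysql -u root -e "SHOW TABLES" "$dbname"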


Running commands on a running docker container using exec

Many times you may land in a situation where you need to execute some commands on a Docker container that is already running. We normally do so because we don't want to stop the container, make changes and start over. Instead we can simply pass the commands we want to the running container.

Using the docker EXEC command:

Docker has the special command exec for the task. The syntax for using it is the following:

docker exec -i <container-name> bash <<< "<command>"

If you are reading this, you must already be familiar with the docker exec command. Normally we use it to get into a Docker container, but here we just use the -i option, as we don't want to get into the container, only pass input to it. The command to execute is passed after <<<.

Example: docker exec -i wordpress_container bash <<< "ls -al"

This command runs inside the Docker container called wordpress_container and lists all the files in the directory it lands in on entering.

Passing a command file to the exec command:

Instead of passing the commands after <<<, we can also pass a file containing the commands to execute. The syntax changes a bit: instead of <<< we use a single less-than symbol (<).

Example of exec with file:

In this example we will run an UPDATE statement on a running MySQL container. We will not pass the update command directly; instead we create a file where the SQL command is written and pass that file as input to the command.

SITE_URL="https://staging.com"
echo "UPDATE wp_options SET option_value = \"$SITE_URL\" WHERE option_name = \"siteurl\"" > /tmp/updater.sql

docker exec -i $container_name mysql -uuser -ppassword dbname < /tmp/updater.sql

Why do it this way?

Many times you may land in errors when passing commands directly over exec, so in such a scenario you may try adding the commands to a file and passing that after the < symbol, as in the above example.


How to connect to a remote server from Jenkins using SSH

You might have written bash scripts, Groovy scripts or any kind of scripts you are familiar with; in my case I regularly go with bash. Writing bash scripts in Jenkins to accomplish tasks on the same machine is just like writing and testing a bash script on your own system. But today we will learn to connect to another server (a remote server), perform some action on the server we connect to, and see how we can pass values while connecting to it.

Some basics of SSH are a prerequisite:

To communicate between servers we use the ssh command on Linux machines. If you are on a Windows machine, you might be familiar with PuTTY. If you are not familiar with ssh and its syntax, get familiar with it first. A simple syntax and example is given below; the IP address provided is just a sample, and you need to replace it with the real IP address of the server.

ssh syntax connecting without a key file:  ssh <username>@<machine-ip>

ssh ubuntu@192.168.0.3

ssh syntax connecting with a key file: ssh <username>@<machine-ip> -i <key-file>

ssh ubuntu@192.168.0.3 -i keyfile.pem

Writing a bash script in the Jenkins "Execute shell" build step:

#!/bin/bash

# variable defined on the Jenkins side; it will be passed to the remote server when SSHing
var1="hellow world"

# -tt forces a terminal; the two -o options skip the host key prompt (explained below)
# note the extra single quotes around $var1 so the value survives with its space intact
ssh -tt -o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no ubuntu@some-ip "name='$var1'" '
echo $name
ls -al
'

Note: The code to execute on the remote machine must be kept inside single quotes, or you can use double quotes. But remember not to repeat the same quote character inside: if we are using single quotes on the outside, then we use double quotes inside wherever we need to define string values.

-o UserKnownHostsFile=/dev/null -o StrictHostKeyChecking=no

These options eliminate the prompt that appears when we SSH to a host for the first time, asking us to choose 'yes' or 'no'.

Also put the Jenkins server's public key (id_rsa.pub, found in ~/.ssh/) into authorized_keys on the server it will be accessing, so that it won't ask for a password.

For example, if Jenkins is running on server1 and is trying to access server2, then server2 needs to add server1's key to its authorized_keys. One way to do that is shown below.
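A hedged way to do that from server1, assuming the Jenkins user has a key pair and can still reach server2 with a password (the user and host are placeholders):

# run on server1 as the user that Jenkins executes builds under
ssh-copy-id -i ~/.ssh/id_rsa.pub ubuntu@server2-ip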

Or watch the video about Jenkins and SSH instead:

Connecting a docker service to an external docker network with docker-compose

The purpose of this post is to learn how to connect a Docker service to an existing, already created Docker network. We will create a Docker network and write a docker-compose.yml file with a service connecting to that external network.

Let's create one Docker network with the overlay driver. For simplicity you may opt not to use the driver and subnet options that I am using here in the network creation.

docker network create --driver overlay --subnet 10.0.9.0/24 myexternal-network

Now let's create a docker-compose.yml file and add the external network to it.

docker-compose.yml

version: "3.4"

networks:
  myexternal-network:
    external: true

services:
  myapp:
    image: imagename
    networks:
      - myexternal-network

So under networks we usually define a new network, but here we added a network and marked it as external. Attaching the network to services is the same as we do for other networks.

Note:
As we are using an external network, it must already exist before running the Docker service, otherwise compose will complain that the network could not be found.
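A quick way to check that the network is already there before bringing the service up (as a side note, the overlay driver is normally used together with swarm mode):

# verify the external network exists before running docker-compose
docker network ls --filter name=myexternal-network
docker network inspect myexternal-network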

Creating a WordPress development environment using docker-compose in 1 minute

Yes, it takes no more than 1 minute to set up a WordPress development environment in Docker containers. Here we will be using a docker-compose.yml file, which can easily be found in the official Docker documentation, and we will run a simple docker-compose command to start the services and run WordPress.

Prerequisites :

Basic knowledge and understanding of  docker and docker-compose.

Here is the link from where I got the contents of the docker-compose file.

https://docs.docker.com/compose/wordpress/ I have added a few descriptions to this docker-compose file for someone who might be unfamiliar with it.

(image: docker-compose file for WordPress)

We are using two images here:

  1. WordPress image: Behind the scenes it maintains PHP and the web server required, so using this image you don't even have to bother about them.
  2. MySQL image: The mysql image is used for the database.

What is done with docker-compose can also be done by creating your own Dockerfile or by running both services and then linking them, but that is more time consuming and becomes frustrating when the number of containers grows. With docker-compose everything becomes easy; the part I like best is linking multiple containers together with ease.

How are the 2 containers linked here:

The WordPress service is dependent on the database service db. You can see a line with depends_on and the service name allocated to it, which is responsible for linking the two services together.
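For reference, here is a sketch along the lines of the compose file in the official docs; the passwords and the published port are placeholders you should change:

cat > docker-compose.yml <<'EOF'
version: '3.3'

services:
  db:
    image: mysql:5.7
    restart: always
    environment:
      MYSQL_ROOT_PASSWORD: somerootpassword
      MYSQL_DATABASE: wordpress
      MYSQL_USER: wordpress
      MYSQL_PASSWORD: wordpresspassword
    volumes:
      - db_data:/var/lib/mysql

  wordpress:
    depends_on:
      - db
    image: wordpress:latest
    restart: always
    ports:
      - "8000:80"
    environment:
      WORDPRESS_DB_HOST: db:3306
      WORDPRESS_DB_USER: wordpress
      WORDPRESS_DB_PASSWORD: wordpresspassword
      WORDPRESS_DB_NAME: wordpress

volumes:
  db_data: {}
EOF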

Command used to start containers:

As we have the docker-compose file, we will be using the docker-compose command to start the services.

docker-compose up -d

This command will run the two services, with -d running them in the background.

Here is a short video on how I ran the services.


Creating your own private docker registry

One of the best things about Docker is its centralized registry. We can easily push and pull the images we require from it. But what if we want our own registry for keeping all the Docker images we build? Keeping the images in a self-hosted Docker registry makes them easier to manage, faster in some cases, and totally under your control.

steps to create docker private registry:

There are other ways to create a private Docker registry, but we will follow what is clearly described in the official Docker documentation. The thing to understand is that we will be running a Docker container which we can use as a private registry. The cool thing here is that there is an image available just for creating a private registry: we run that image to create a registry container, and keep it private with some authentication set. The name of the image is registry:2.

Start the docker container as you normally do:

$ docker run -d -p 5000:5000 --restart=always --name privateregistry registry:2

So the container will be running in daemon mode with the name privateregistry, listening locally on port 5000. Check with the docker ps command to verify whether the registry:2 container is running or not.
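For example:

# check that the registry container is up
docker ps --filter name=privateregistry
# the registry API should also answer on port 5000 with a JSON list of repositories
curl http://localhost:5000/v2/_catalog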

How to push images to the private registry we just made:

So our private registry container is up and running; now we will push one image to it. For the sake of example we will simply pull the alpine image, change its tag and push it to our private registry.

Let's pull the alpine image:

docker pull alpine

We pulled the alpine image from the official Docker repo. Now we will make it ready to push to our local registry.


docker tag alpine localhost:5000/private-alpine

So here we added a new tag to the alpine image we recently pulled. The format of the tag is hostname:port/imagename, so when pushing the image Docker will understand that it has to be pushed to that particular host and port.

Now we are ready to push our Docker image to the local registry.

docker push localhost:5000/private-alpine

Pulling from the private registry

docker pull localhost:5000/private-alpine

The format for pulling is just the same as pulling from the Docker registry; the only difference you will find is the image tag, which has to carry the host and port of your private registry.

(image: creating a docker private registry guide)

Watch the short video that shows how it works:

The video is in progress and will be posted shortly.

Final Note on docker private registry:

We did this to demonstrate how to create your own private Docker registry, but for security purposes you have to implement some authentication so that only authorized users can push to and pull from the registry you have built.
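As a hedged sketch of that, following the basic-auth approach from the official registry docs; the username, password and auth directory below are placeholders:

# create an htpasswd file (htpasswd comes from the apache2-utils package)
mkdir -p auth
htpasswd -Bbn testuser testpassword > auth/htpasswd

# run the registry with basic authentication enabled
docker run -d -p 5000:5000 --restart=always --name privateregistry \
  -v "$(pwd)/auth:/auth" \
  -e REGISTRY_AUTH=htpasswd \
  -e "REGISTRY_AUTH_HTPASSWD_REALM=Registry Realm" \
  -e REGISTRY_AUTH_HTPASSWD_PATH=/auth/htpasswd \
  registry:2

# clients then need to log in before pushing or pulling
docker login localhost:5000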

my first hello world python script build running on jenkins

It took 5 minutes at most to set up Jenkins on my local machine. Here is the installation guide I followed to set it up on an Ubuntu 16.04 machine.

  1. wget -q -O - https://pkg.jenkins.io/debian/jenkins-ci.org.key | sudo apt-key add -

    This adds the repository key and should return OK.

  2. echo deb https://pkg.jenkins.io/debian-stable binary/ | sudo tee /etc/apt/sources.list.d/jenkins.list

    This adds the package repository address to the server's sources list.

  3. sudo apt-get update
  4. sudo apt-get install jenkins

    So Jenkins is ready and up and running on port 8080.

Typing 0.0.0.0:8080 in my browser URL bar opens the Jenkins setup page asking for the initial password. This page appears when you open Jenkins for the first time. The location where you can find the password is also given on that page: use sudo cat followed by the path given, which will return the password you need to enter there, and then a user registration process starts.

Note: Please fill in the form properly and create a user account; don't skip it. Last time I skipped it to use the default username and password, but that didn't work and I had to re-install Jenkins. That's it, we are ready to start with Jenkins.

This is what I did after the Jenkins installation:

  1. Created a GitHub repository
  2. Created a simple hello world script and named it hellow_world.py (a minimal sketch is shown below)
  3. Created my first job to start building my simple hellow_world script.
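The script itself can be as small as a single print statement; a minimal sketch of hellow_world.py:

cat > hellow_world.py <<'EOF'
print("hello world")
EOF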

Jenkins job setup details:

  1. Added the git repository that I created
  2. Added a build step with the following shell script

    #!/bin/sh
    python hellow_world.py
  3. Apply and save it
  4. Start the build by selecting the project and clicking Build Now
  5. Output

(image: hello world python script running successfully in Jenkins)


It worked as expected; happy me, and I tweeted on Twitter the big output you see above.

virtual host setup in apache webserver for developers

I was once a XAMPP user when I used to work on a Windows machine, and slowly shifted to a Linux machine with Ubuntu. The shift from Windows to Linux is quite overwhelming: everything happens in just a click on Windows, but the fun part of Linux is that you get to play with everything going on underneath.

Let's learn to set up a web server using Apache and configure virtual hosts on it.

Install apache:
(in ubuntu use)

sudo apt-get install apache2

(in centos use)


sudo yum install httpd

After successfully installing Apache, start the Apache service. There are two ways you can start it.

sudo service apache2 start

or  type


/etc/init.d/apache2 start

I prefer the second one, going through init.d, when I am running Apache in a Docker container.

The location might be quite different based on the Apache version you are using and the Linux distro type.
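On newer systemd-based distros you may also see the service managed through systemctl, for example:

sudo systemctl start apache2   # ubuntu/debian
sudo systemctl start httpd     # centos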

Run to test:

When you first start your Apache server it may show you a warning message along with the IP address it is serving on. Later we will look at hostnames in detail; for now we will use that IP address to check whether Apache is running. On a successful run, the default Apache page is shown, which comes from this default location:

/var/www/html. Think of it as the htdocs folder in XAMPP or www in WAMP. Clear, huh!

Some virtual host understanding

Up to now, what we have accomplished is to successfully run an Apache web server and serve one website from the /var/www/html location. What if we want to host other websites too on that same web server? We are not going to create another web server; instead we will run multiple websites on the same web server, either with different IP addresses or with different hostnames running on the same IP address.

From the perspective of Windows users who are familiar with XAMPP, it will be like adding multiple websites inside the htdocs folder, separated by folders, which is what we do in XAMPP. But here we will use the concept of virtual hosting. It feels like creating multiple web servers, each running a separate website, while in reality we are running a single Apache web server. Clear, huh!

Let's get into the setup part of virtual hosts.

The default page that got loaded when we browsed the IP address was served by the default virtual host, which is used when no other virtual host matching the URL is found.

  1. Name based virtual host:

The trick is that we run everything on the same IP address, but the document locations are different, i.e.

Create two directories in /var/www/html to serve as separate websites:
/var/www/html/vhost1
/var/www/html/vhost2

Now we will set up the Apache config file to serve both as virtual hosts running on the same IP address, which we will map in /etc/hosts when working locally.

Defining the 2 virtual hosts in /etc/apache2/apache2.conf:

NameVirtualHost *:80
#vhost1
<VirtualHost *:80>
ServerName vhost1.com
DocumentRoot /var/www/html/vhost1
</VirtualHost>

#vhost2
<VirtualHost *:80>
ServerName vhost2.com
DocumentRoot /var/www/html/vhost2
</VirtualHost>

This is how we define the virtual hosts, setting the virtual host names; here we have vhost1.com and vhost2.com. When you try to browse them they won't open, because these are hostnames we made up to use locally, not globally available domains. To make them available within our system we add them to /etc/hosts. Add this line to /etc/hosts:

<ip-address-of-the-local-machine> vhost1.com vhost2.com

Now when a user tries to browse vhost1.com, the virtual host setting will serve the files in /var/www/html/vhost1, and similarly for vhost2.com.

This is the easy way to create name-based virtual hosting in Apache.
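Before browsing, it is worth validating the configuration and reloading Apache so that the new virtual hosts are picked up, for example:

# check the syntax of the virtual host definitions, then reload apache
sudo apachectl configtest
sudo service apache2 reload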

2. IP based virtual host:

Another way to create virtual hosts is to give each one a separate IP address along with a separate document root, unlike what we did previously, assigning different document roots under the same IP address.
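A rough sketch of what that could look like, assuming the machine has two IP addresses bound to it; the addresses and paths below are only examples:

sudo tee -a /etc/apache2/apache2.conf <<'EOF'
<VirtualHost 192.168.0.10:80>
    DocumentRoot /var/www/html/vhost1
</VirtualHost>

<VirtualHost 192.168.0.11:80>
    DocumentRoot /var/www/html/vhost2
</VirtualHost>
EOF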

Conclusion

This is how you can easily create as many virtual hosts as you want and run your applications on them. This was just the basic understanding you need to get started with Apache and virtual host setup. If there is anything you would like to know, or anything I might be missing, then drop me a message. Thanks.