Purpose: We have seen many instances of similar, repetitive manual configuration across multiple servers. An obvious way to automate such tasks is configuration management, and several tools exist for this purpose. Ansible is one such tool, usable for configuration management, continuous deployment, and more.
Scope: This blog covers the basics of Ansible, its installation, the platforms it supports, and one use case.
What is Ansible
Ansible is a lightweight, open source configuration management system with an agentless architecture. This means the managed nodes do not need to install and run background daemons to connect to the control node, which reduces load on the network because the nodes never have to poll the control node.
For the demo I have launched three Ubuntu AWS EC2 instances. One of them will be the control node and the other two will be managed nodes. Ansible needs to be installed only on the control node.
We can install Ansible in two ways: use apt/yum for a stable release, or install from source to get the development version of Ansible, which has the advantage of new features as soon as they are implemented.
Run the following commands to install Ansible from source on the control node:
apt-get install git
git clone git://github.com/ansible/ansible.git --recursive
cd ./ansible
source ./hacking/env-setup
Also install the Python modules Ansible uses (paramiko, PyYAML, Jinja2, httplib2) with pip.
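The dependency installation above can be done in a single pip invocation; a minimal sketch (module names taken from the list above; depending on your setup you may need sudo or a virtualenv):

```shell
# Install the Python libraries the Ansible control node depends on
pip install paramiko PyYAML Jinja2 httplib2
```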
Control node requirements:
1. Python 2.6 or higher
2. Windows - not supported
3. OS supported: almost all Linux and Unix distributions
Managed node requirements:
1. Python 2.4 or later
2. Windows nodes - supported from version 1.7
Configuring the Inventory File - default location: /etc/ansible/hosts
It defines which servers Ansible will manage. Since we are running the instances in the same VPC, it is advisable to use private IPs.
We can also logically group our servers in inventory file as follows:
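A minimal sketch of a grouped inventory in the default INI format (the group name webservers is used in the next paragraph; the private IPs are placeholders):

```ini
[webservers]
10.0.0.11
10.0.0.12
```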
We can then refer to these grouped servers in our Ansible playbook as hosts: webservers.
Ansible uses SSH keys for authentication between the control node and the managed nodes. Use ssh-keygen to generate a key pair. Copy the public key generated at /root/.ssh/id_rsa.pub to every managed node (location: /root/.ssh/authorized_keys) you want Ansible to connect to.
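The key setup above can be sketched as follows (run on the control node; the managed-node IP is a placeholder, and ssh-copy-id assumes password login is still enabled on that node):

```shell
# Generate an RSA key pair with no passphrase (skip if one already exists)
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa
# Append the public key to /root/.ssh/authorized_keys on a managed node
# (10.0.0.11 is a placeholder for your managed node's private IP)
ssh-copy-id root@10.0.0.11
```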
Test the connection
From the control machine we will try pinging all the hosts mentioned in /etc/ansible/hosts file
ansible all -m ping
all – Use all defined servers from the inventory file
-m ping – Use the “ping” module, which tests that Ansible can connect to and run Python on each host and returns “pong” on success (it does not use the ICMP ping command)
After configuring the inventory file, we can run tasks against the hosts defined in it. These tasks are defined in Ansible playbooks, which are plain-English YAML scripts. By default, Ansible runs each task on all the nodes in parallel. We can also configure serial execution of tasks.
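A minimal playbook sketch under these assumptions (the webservers group name matches the inventory grouping described earlier; the package name is illustrative; serial: 1 switches from the default parallel run to one host at a time):

```yaml
- hosts: webservers
  serial: 1          # run on one host at a time instead of in parallel
  tasks:
    - name: Ensure the jetty package is installed
      apt: name=jetty state=present
```

Save it as, say, site.yml and run it from the control node with ansible-playbook site.yml.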
Output of running the Ansible playbook on the control node
Log in to the host machines for verification
This approach can be used for a continuous deployment setup where the latest version of a WAR is pulled from a repository like Nexus and copied to web servers such as Jetty.
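A hedged sketch of such a deployment task (the Nexus URL and the Jetty webapps path are assumptions for illustration; get_url is a standard Ansible module for downloading files to managed nodes):

```yaml
- hosts: webservers
  tasks:
    - name: Pull the latest WAR from Nexus (URL is a placeholder)
      get_url:
        url: http://nexus.example.com/repository/releases/myapp.war
        dest: /var/lib/jetty/webapps/myapp.war   # assumed Jetty webapps dir
```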
Ansible also provides a dashboard, Ansible Tower, to manage hosts. It is free to use for up to 30 days, beyond which a license is required.