How to Install Kubernetes on CentOS?

17 / Jan / 2017 by Arun Dhyani 10 comments



In this blog, we will learn how to set up a Kubernetes cluster on servers running CentOS (bare-metal installation), as well as how to deploy add-on services such as DNS and the Kubernetes Dashboard.

If you are new to Kubernetes and want to understand its architecture, you can go through the blog on the Introduction to Kubernetes. So, let's get started with the installation.


You need at least two servers to set up a Kubernetes cluster. For this blog, we are using three servers to form the cluster. Make sure that each of these servers has at least 1 CPU core and 2 GB of memory.


Cluster Configuration:

  1. Infrastructure private subnet IP range:
  2. Flannel subnet IP range: (You can choose any IP range; just make sure it does not overlap with any other IP range)
  3. Service cluster IP range for Kubernetes: (You can choose any IP range; just make sure it does not overlap with any other IP range)
  4. Kubernetes Service IP: (The first IP from the service cluster IP range is always allocated to the Kubernetes Service)
  5. DNS service IP: (You can use any IP from the service cluster IP range; just make sure that the IP is not allocated to any other service)
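The concrete values did not survive in this post. Purely as an illustration, a non-overlapping layout could look like the following; all five values are assumed examples, not the original author's:

```
# Example values only -- substitute your own non-overlapping ranges
Infrastructure private subnet : 172.31.0.0/16
Flannel subnet                : 172.30.0.0/16
Service cluster IP range      : 10.254.0.0/16
Kubernetes Service IP         : 10.254.0.1    (first IP of the service range)
DNS service IP                : 10.254.3.100  (any free IP from the service range)
```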

Step 1: Create a repo on all hosts, i.e., Master, Minion1, and Minion2.


vim /etc/yum.repos.d/virt7-docker-common-release.repo
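The contents of the repo file were not preserved in this post. Based on the virt7 CentOS community-build repo that this guide's packages come from, it would look roughly like this (verify the baseurl against the current CentOS community build service before using it):

```ini
[virt7-docker-common-release]
name=virt7-docker-common-release
baseurl=http://cbs.centos.org/repos/virt7-docker-common-release/x86_64/os/
gpgcheck=0
```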



Step 2: Install Kubernetes, etcd, and flannel.


yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel


The above command also installs Docker and cadvisor. 

Step 3: Configure the Kubernetes Components.

Kubernetes Common Configuration (On All Nodes)

Let's get started with the common configuration for the Kubernetes cluster. This configuration should be done on all the hosts, i.e., Master and Minions.


vi /etc/kubernetes/config

# Comma separated list of nodes running etcd cluster
# Logging will be stored in system journal
# Journal message level, 0 is debug
# Should this cluster be allowed to run privileged docker containers
# Api-server endpoint used in scheduler and controller-manager
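The variable values were stripped from the listing above. A filled-in /etc/kubernetes/config matching those comments would look roughly like this, where <master-private-ip> is a placeholder for your master's private IP:

```shell
# Comma-separated list of nodes running the etcd cluster
KUBE_ETCD_SERVERS="--etcd-servers=http://<master-private-ip>:2379"
# Logging will be stored in the system journal
KUBE_LOGTOSTDERR="--logtostderr=true"
# Journal message level, 0 is debug
KUBE_LOG_LEVEL="--v=0"
# Should this cluster be allowed to run privileged docker containers
KUBE_ALLOW_PRIV="--allow-privileged=false"
# Api-server endpoint used in scheduler and controller-manager
KUBE_MASTER="--master=http://<master-private-ip>:8080"
```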


ETCD Configuration (On Master)

Next, we need to configure etcd for the Kubernetes cluster. The etcd configuration is stored in /etc/etcd/etcd.conf.


vi /etc/etcd/etcd.conf




All the configuration data of Kubernetes is stored in etcd. To increase security, etcd can be bound to the private IP address of the master node; the etcd endpoint can then only be accessed from the private subnet.
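The etcd.conf listing itself is missing from this post. A minimal configuration that binds etcd to the master's private IP, as described above, might look like this (<master-private-ip> is a placeholder):

```shell
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
# Bind the client endpoint to the master's private IP only
ETCD_LISTEN_CLIENT_URLS="http://<master-private-ip>:2379"
ETCD_ADVERTISE_CLIENT_URLS="http://<master-private-ip>:2379"
```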

API Server Configuration (On Master)

API Server handles the REST operations and acts as a front-end to the cluster’s shared state. API Server Configuration is stored at /etc/kubernetes/apiserver.

Kubernetes uses certificates to authenticate API requests. Before configuring the API server, we need to generate the certificates used for authentication. Kubernetes provides ready-made scripts for generating these certificates, which can be found here.

Download this script and update line number 30 in the file.


# Update the line below with a group that exists on the Kubernetes master
# (use the user group with which you are planning to run the Kubernetes services)


Now, run the script with the following parameters to create the certificates:


bash "" "IP:,IP:,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local"


Here, the first IP is the IP of the master server and the second is the IP of the Kubernetes service. Now, we can configure the API server:


vi /etc/kubernetes/apiserver

# Bind kube API server to this IP
# Port that kube api server listens to.
# Port kubelet listen on
# Address range to use for services(Work unit of Kubernetes)
# default admission control policies
# Add your own!
KUBE_API_ARGS="--client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key"


Note: Please make sure the service cluster IP range doesn't overlap with the infrastructure subnet IP range.
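For reference, a filled-in /etc/kubernetes/apiserver matching the comments above might look like the following. The bind address, ports, service range, and admission-control list are assumed example values; adjust them to your own cluster:

```shell
# Bind kube API server to this IP
KUBE_API_ADDRESS="--address=0.0.0.0"
# Port that kube api server listens to
KUBE_API_PORT="--port=8080"
# Port kubelet listens on
KUBELET_PORT="--kubelet-port=10250"
# Address range to use for services (example range)
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
# Default admission control policies
KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
# Add your own!
KUBE_API_ARGS="--client-ca-file=/srv/kubernetes/ca.crt --tls-cert-file=/srv/kubernetes/server.cert --tls-private-key-file=/srv/kubernetes/server.key"
```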

Controller Manager Configuration (On Master)


vi /etc/kubernetes/controller-manager

# Add your own!
KUBE_CONTROLLER_MANAGER_ARGS="--root-ca-file=/srv/kubernetes/ca.crt --service-account-private-key-file=/srv/kubernetes/server.key"


Kubelet Configuration (On Minions)

The kubelet is a node/minion agent that runs pods and makes sure they are healthy. It also communicates pod details to the Kubernetes Master. The kubelet configuration is stored in /etc/kubernetes/kubelet.

[On Minion1]

vi /etc/kubernetes/kubelet

# kubelet bind ip address(Provide private ip of minion)
# port on which kubelet listen
# leave this blank to use the hostname of server
# Location of the api-server
# Add your own!


[On Minion2]

vi /etc/kubernetes/kubelet

# kubelet bind ip address(Provide private ip of minion)
# port on which kubelet listen
# leave this blank to use the hostname of server
# Location of the api-server
# Add your own!
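The kubelet values were stripped from the two listings above; apart from the bind address, both minions use the same configuration. With <minion-private-ip> and <master-private-ip> as placeholders, the file would look roughly like:

```shell
# kubelet bind IP address (private IP of this minion)
KUBELET_ADDRESS="--address=<minion-private-ip>"
# Port on which kubelet listens
KUBELET_PORT="--port=10250"
# Leave this blank to use the hostname of the server
KUBELET_HOSTNAME="--hostname-override="
# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://<master-private-ip>:8080"
# Add your own!
KUBELET_ARGS=""
```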


Before configuring Flannel for the Kubernetes cluster, we need to create the network configuration for Flannel in etcd.

So start the etcd node on the master using the following command:

systemctl start etcd

Create a new key in etcd to store Flannel configuration using the following command:

etcdctl mkdir /kube-centos/network

Next, we need to define the network configuration for Flannel:


etcdctl mk /kube-centos/network/config "{ \"Network\": \"\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"


The above command allocates a subnet to the Flannel network. A Flannel subnet with a /24 CIDR is allocated to each server in the Kubernetes cluster.

Note: Please make sure Flannel subnet doesn’t overlap with infrastructure subnet or service cluster IP range.
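Expanded for readability, the JSON written to etcd has the following shape. The Network value shown here (172.30.0.0/16) is an assumed example, since the original value was not preserved:

```json
{
  "Network": "172.30.0.0/16",
  "SubnetLen": 24,
  "Backend": { "Type": "vxlan" }
}
```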

Flannel Configuration (On All Nodes)

Kubernetes uses Flannel to build an overlay network for inter-pod communication. The Flannel configuration is stored in /etc/sysconfig/flanneld.


vi /etc/sysconfig/flanneld

# etcd URL location.  Point this to the server where etcd runs
# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
# Any additional options that you want to pass
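A filled-in /etc/sysconfig/flanneld matching the comments above might look like this. The variable names can differ slightly between flannel package versions, and <master-private-ip> and the interface name are placeholders:

```shell
# etcd URL location. Point this to the server where etcd runs
FLANNEL_ETCD="http://<master-private-ip>:2379"
# etcd config key. This is the configuration key that flannel queries
# for address range assignment
FLANNEL_ETCD_KEY="/kube-centos/network"
# Any additional options that you want to pass
FLANNEL_OPTIONS="--iface=eth0"
```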


Step 4: Start services on Master and Minion

On Master


systemctl enable kube-apiserver
systemctl start kube-apiserver
systemctl enable kube-controller-manager
systemctl start kube-controller-manager
systemctl enable kube-scheduler
systemctl start kube-scheduler
systemctl enable flanneld
systemctl start flanneld


On Minions


systemctl enable kube-proxy
systemctl start kube-proxy
systemctl enable kubelet
systemctl start kubelet
systemctl enable flanneld
systemctl start flanneld
systemctl enable docker
systemctl start docker


Note: On each host, make sure that the IP address allocated to docker0 is the first IP address in the Flannel subnet, otherwise your cluster won't work properly. To check this, use the "ifconfig" command.


Step 5: Check Status of all Services

Make sure etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and flanneld are running on the Master, and that kube-proxy, kubelet, flanneld, and docker are running on the Minions.

Deploying Addons in Kubernetes

Configuring DNS for Kubernetes Cluster

To enable service name discovery in our Kubernetes cluster, we need to configure DNS for it. To do so, we deploy a DNS pod and service in the cluster and configure the kubelet to resolve all DNS queries via this DNS service (local DNS).

You can download DNS Replication Controller and Service YAML from my repository. You can also download the latest version of DNS from official Kubernetes repository (kubernetes/cluster/addons/dns).

Next, use the following command to create a replication controller and service: 


kubectl create -f DNS/skydns-rc.yaml
kubectl create -f DNS/skydns-svc.yaml


Note: Make sure you have entered the correct cluster IP for the DNS service in skydns-svc.yaml.
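The relevant part of skydns-svc.yaml is the clusterIP field. The excerpt below is a sketch with an assumed example IP (10.254.3.100) from an assumed 10.254.0.0/16 service range; use the DNS service IP you chose for your own cluster:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.3.100   # assumed example; must come from your service cluster IP range
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP
```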

For this blog, we have used the DNS service IP chosen in the cluster configuration above.

Now, configure the kubelet on all Minions to resolve all DNS queries from our local DNS service.


vim /etc/kubernetes/kubelet

# Add your own!
KUBELET_ARGS="--cluster-dns= --cluster-domain=cluster.local"


Restart kubelet on all Minions to load the new kubelet configuration.

systemctl restart kubelet

Configuring Dashboard for Kubernetes Cluster

The Kubernetes Dashboard provides a user interface through which we can manage Kubernetes work units. We can create, delete, or edit all work units from the Dashboard.

The Kubernetes Dashboard is also deployed as a pod in the cluster. You can download the Dashboard Deployment and Service YAML from my repository. You can also download the latest version of the Dashboard from the official Kubernetes repository (kubernetes/cluster/addons/dashboard).

After downloading YAML, run the following commands from the master:


kubectl create -f Dashboard/dashboard-controller.yaml
kubectl create -f Dashboard/dashboard-service.yaml


Now you can access Kubernetes Dashboard on your browser.

Open http://master_public_ip:8080/ui on your browser.


Note: Don't forget to secure your Dashboard. You can install an Nginx or Apache web server on your master that proxies to localhost:8080 and enable HTTP auth on it to secure your Dashboard.
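As a sketch of the Nginx approach suggested above (the listening port, realm name, and htpasswd path are assumptions), a reverse-proxy block with basic auth could look like:

```nginx
server {
    listen 80;

    # Require a username/password created with, e.g., the htpasswd tool
    auth_basic           "Kubernetes Dashboard";
    auth_basic_user_file /etc/nginx/.htpasswd;

    # Proxy everything to the API server / UI on localhost:8080
    location / {
        proxy_pass http://127.0.0.1:8080;
        proxy_set_header Host $host;
    }
}
```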

Configuring Monitoring for Kubernetes Cluster

Kubernetes provides detailed resource usage monitoring at the container, pod, and cluster level, so users can monitor their applications at each of these levels. This gives users deep insight into their applications and makes it easy to find bottlenecks. To enable monitoring, we need to configure the monitoring stack for Kubernetes. Heapster lies at the heart of the monitoring stack. Heapster runs as a pod in the cluster; it discovers all the nodes in the cluster and queries usage information from the kubelet on each node. This usage information is stored in InfluxDB and visualized using Grafana. More information on this can be found here.

Kubernetes provides ready-made YAML configs for the monitoring stack. These YAML configs aren't meant to be used directly, so we need to make some changes. Download the latest version of the monitoring stack from the official Kubernetes repository (kubernetes/cluster/addons/cluster-monitoring/influxdb). We only need to update heapster-controller.yaml: remove the template from the top of the file and replace the variables with their values in the body.

Updated YAML configs for monitoring stack can also be found here.

Use the following command to launch the monitoring stack:


kubectl create -f cluster-monitoring/influxdb


Check Cluster Configuration

Next, we need to check if all the addons are working properly.

Run the following command to check if all the add-on services are running:

kubectl cluster-info


comments (10)

  1. srihari

    can I use latest centos – kubernetes 110 repo with these instructions given above

  2. Muk

    Error on running dashboard:
    [root@muk8smaster-corp ~]# kubectl --namespace=kube-system get all
    kubernetes-dashboard k8s-app=kubernetes-dashboard, k8s-app=kubernetes-dashboard 80/TCP 14m

    But on the WebPage:
    kind “Status”
    apiVersion “v1”
    metadata {}
    status “Failure”
    message “endpoints \”kube-ui\” not found”
    reason “NotFound”
    name “kube-ui”
    kind “endpoints”
    code 404

  3. Lusine


    I have installed kubernetes via yum. But for now, I can’t run systemctl enable kube-apiserver and other services on Master server.
    What have I done wrong?


    1. Rumman Ahmed

      Try the below solution it worked for me:
      openssl genrsa -out /tmp/serviceaccount.key 2048

      vim /etc/kubernetes/apiserver:

      vim /etc/kubernetes/controller-manager
      systemctl restart kube-controller-manager.service

  4. sudipta chakraborty

    kubedns does nt work as per your steps. Im nt able to resolve
    kubectl exec -ti busybox -- nslookup kubernetes.default
    kubectl exec -it busybox -- nslookup kubernetes.default.svc.cluster.local

  5. Govindaraj

    Thanks for the nice writeup. I was seeing below error while running some pods. Any thoughts?

    /var/run/secrets/ no such file or directory.

    # kubectl get serviceAccounts
    default 0 14d

