{"id":45000,"date":"2017-01-17T09:59:28","date_gmt":"2017-01-17T04:29:28","guid":{"rendered":"http:\/\/www.tothenew.com\/blog\/?p=45000"},"modified":"2017-01-20T11:45:13","modified_gmt":"2017-01-20T06:15:13","slug":"how-to-install-kubernetes-on-centos","status":"publish","type":"post","link":"https:\/\/www.tothenew.com\/blog\/how-to-install-kubernetes-on-centos\/","title":{"rendered":"How to Install Kubernetes on CentOS?"},"content":{"rendered":"<p>&nbsp;<\/p>\n<p><span style=\"font-weight: 400\"><img decoding=\"async\" loading=\"lazy\" class=\"wp-image-45053 aligncenter\" src=\"\/blog\/wp-ttn-blog\/uploads\/2017\/01\/kubernetes-logo.png\" alt=\"kubernetes-logo\" width=\"425\" height=\"180\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2017\/01\/kubernetes-logo.png 844w, \/blog\/wp-ttn-blog\/uploads\/2017\/01\/kubernetes-logo-300x127.png 300w, \/blog\/wp-ttn-blog\/uploads\/2017\/01\/kubernetes-logo-624x264.png 624w\" sizes=\"(max-width: 425px) 100vw, 425px\" \/><\/span><\/p>\n<p><span style=\"font-weight: 400\">In this blog, we will learn how to setup Kubernetes cluster on servers running on CentOS (Bare-metal installation) as well as deploy add-on services such as DNS and Kubernetes Dashboard.<\/span><\/p>\n<p><span style=\"font-weight: 400\">If you are new to Kubernetes cluster\u00a0and want to understand its architecture then you can through the blog on the <a title=\"Getting started with Kubernetes\" href=\"http:\/\/www.tothenew.com\/blog\/understanding-kubernetes-architecture-and-setting-up-a-cluster-on-ubuntu\/\">Introduction on Kubernetes<\/a>. \u00a0So, Let\u2019s get started with the installation.\u00a0<\/span><\/p>\n<p><strong>Prerequisites:<\/strong><\/p>\n<p><span style=\"font-weight: 400\">You need at least 2 servers for setting\u00a0<span style=\"background-color: #f5f6f5\">Kubernetes<\/span>\u00a0cluster. For this blog, we are using three servers to form\u00a0<span style=\"background-color: #f5f6f5\">Kubernetes<\/span>\u00a0cluster. 
Make sure that each of these servers has at least 1 core and 2 GB of memory.<\/span><\/p>\n<p>Master \u00a0 172.16.0.1<br \/>\nMinion1 172.16.0.2<br \/>\nMinion2 172.16.0.3<\/p>\n<p><strong>Cluster Configuration:<\/strong><\/p>\n<ol>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Infrastructure private subnet IP range: 172.16.0.0\/16<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Flannel subnet IP range: 172.30.0.0\/16 (You can choose any IP range; just make sure it does not overlap with any other IP range)<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Service cluster IP range for Kubernetes: 10.254.0.0\/16 (You can choose any IP range; just make sure it does not overlap with any other IP range)<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">Kubernetes service IP: 10.254.0.1 (The first IP from the service cluster IP range is always allocated to the Kubernetes service)<\/span><\/li>\n<li style=\"font-weight: 400\"><span style=\"font-weight: 400\">DNS service IP: 10.254.3.100 (You can use any IP from the service cluster IP range; just make sure that the IP is not allocated to any other service)<\/span><\/li>\n<\/ol>\n<p><b>Step 1: Create a Repo on all Hosts, i.e., Master, Minion1, and Minion2.<\/b><\/p>\n<p>[js]<\/p>\n<p>vim \/etc\/yum.repos.d\/virt7-docker-common-release.repo<\/p>\n<p>[virt7-docker-common-release]<br \/>\nname=virt7-docker-common-release<br \/>\nbaseurl=http:\/\/cbs.centos.org\/repos\/virt7-docker-common-release\/x86_64\/os\/<br \/>\ngpgcheck=0<\/p>\n<p>[\/js]<\/p>\n<p><b>Step 2: Install Kubernetes, etcd, and flannel. 
<\/b><\/p>\n<p>[js]<\/p>\n<p>yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel<\/p>\n<p>[\/js]<\/p>\n<p><span style=\"font-weight: 400\">The above command also installs Docker and cAdvisor.<\/span><\/p>\n<p><b>Step 3: Configure Kubernetes Components.<\/b><\/p>\n<p><b>Kubernetes Common Configuration (On All Nodes)<\/b><\/p>\n<p><span style=\"font-weight: 400\">Let\u2019s get started with the common configuration for the Kubernetes cluster. This configuration should be done on all the hosts, i.e., Master and Minions.<\/span><\/p>\n<p>[js]<\/p>\n<p>vi \/etc\/kubernetes\/config<\/p>\n<p># Comma separated list of nodes running etcd cluster<br \/>\nKUBE_ETCD_SERVERS=&quot;--etcd-servers=http:\/\/172.16.0.1:2379&quot;<br \/>\n# Logging will be stored in system journal<br \/>\nKUBE_LOGTOSTDERR=&quot;--logtostderr=true&quot;<br \/>\n# Journal message level, 0 is debug<br \/>\nKUBE_LOG_LEVEL=&quot;--v=0&quot;<br \/>\n# Should this cluster be allowed to run privileged docker containers<br \/>\nKUBE_ALLOW_PRIV=&quot;--allow-privileged=false&quot;<br \/>\n# Api-server endpoint used in scheduler and controller-manager<br \/>\nKUBE_MASTER=&quot;--master=http:\/\/172.16.0.1:8080&quot;<\/p>\n<p>[\/js]<\/p>\n<p><b>ETCD Configuration (On Master)<\/b><\/p>\n<p><span style=\"font-weight: 400\">Next, we need to configure etcd for the Kubernetes cluster. 
Etcd configuration is stored in \/etc\/etcd\/etcd.conf.<\/span><\/p>\n<p>[js]<\/p>\n<p>vi \/etc\/etcd\/etcd.conf<\/p>\n<p>#[member]<br \/>\nETCD_NAME=default<br \/>\nETCD_DATA_DIR=&quot;\/var\/lib\/etcd\/default.etcd&quot;<\/p>\n<p>ETCD_LISTEN_CLIENT_URLS=&quot;http:\/\/0.0.0.0:2379&quot;<br \/>\n#[cluster]<br \/>\nETCD_ADVERTISE_CLIENT_URLS=&quot;http:\/\/0.0.0.0:2379&quot;<\/p>\n<p>[\/js]<\/p>\n<p><span style=\"font-weight: 400\">All the configuration data of Kubernetes is stored in etcd. To increase security, etcd can be bound to the private IP address of the master node. The etcd endpoint can then only be accessed from the private subnet.<\/span><\/p>\n<p><b>API Server Configuration (On Master)<\/b><\/p>\n<p><span style=\"font-weight: 400\">The API Server handles the REST operations and acts as a front-end to the cluster\u2019s shared state. The API Server configuration is stored at \/etc\/kubernetes\/apiserver.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Kubernetes uses certificates to authenticate API requests. Before configuring the API server, we need to generate certificates that can be used for authentication. 
Kubernetes provides ready-made scripts for generating these certificates, which can be found <\/span><a href=\"https:\/\/github.com\/kubernetes\/kubernetes\/blob\/master\/cluster\/saltbase\/salt\/generate-cert\/make-ca-cert.sh\"><span style=\"font-weight: 400\">here<\/span><\/a><span style=\"font-weight: 400\">.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Download this script and update line number 30 in the file.<\/span><\/p>\n<p>[js]<\/p>\n<p># Update the below line with a group that exists on the Kubernetes Master.<br \/>\n# Use the user group with which you plan to run the Kubernetes services.<br \/>\ncert_group=${CERT_GROUP:-kube}<\/p>\n<p>[\/js]<\/p>\n<p><span style=\"font-weight: 400\">Now, run the script with the following parameters to create the certificates:<\/span><\/p>\n<p>[js]<\/p>\n<p>bash make-ca-cert.sh &quot;172.16.0.1&quot; &quot;IP:172.16.0.1,IP:10.254.0.1,DNS:kubernetes,DNS:kubernetes.default,DNS:kubernetes.default.svc,DNS:kubernetes.default.svc.cluster.local&quot;<\/p>\n<p>[\/js]<\/p>\n<p><span style=\"font-weight: 400\">Here, 172.16.0.1 is the IP of the master server, and 10.254.0.1 is the IP of the Kubernetes service. 
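<\/span><\/p>\n<p><span style=\"font-weight: 400\">Optionally, you can sanity-check the generated certificates before using them. This is a quick sketch, assuming the script wrote its output to \/srv\/kubernetes (the paths referenced by the API server configuration):<\/span><\/p>

```shell
# List the SANs on the server certificate; they should match the IPs and
# DNS names passed to make-ca-cert.sh
openssl x509 -in /srv/kubernetes/server.cert -noout -text | grep -A1 "Subject Alternative Name"

# Confirm the server certificate chains back to the generated CA;
# on success this prints "/srv/kubernetes/server.cert: OK"
openssl verify -CAfile /srv/kubernetes/ca.crt /srv/kubernetes/server.cert
```

<p><span style=\"font-weight: 400\">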
Now, we can configure the API server:<\/span><\/p>\n<p>[js]<\/p>\n<p>vi \/etc\/kubernetes\/apiserver<\/p>\n<p># Bind kube API server to this IP<br \/>\nKUBE_API_ADDRESS=&quot;--address=0.0.0.0&quot;<br \/>\n# Port that kube api server listens to.<br \/>\nKUBE_API_PORT=&quot;--port=8080&quot;<br \/>\n# Port kubelet listens on<br \/>\nKUBELET_PORT=&quot;--kubelet-port=10250&quot;<br \/>\n# Address range to use for services (work unit of Kubernetes)<br \/>\nKUBE_SERVICE_ADDRESSES=&quot;--service-cluster-ip-range=10.254.0.0\/16&quot;<br \/>\n# Default admission control policies<br \/>\nKUBE_ADMISSION_CONTROL=&quot;--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota&quot;<br \/>\n# Add your own!<br \/>\nKUBE_API_ARGS=&quot;--client-ca-file=\/srv\/kubernetes\/ca.crt --tls-cert-file=\/srv\/kubernetes\/server.cert --tls-private-key-file=\/srv\/kubernetes\/server.key&quot;<\/p>\n<p>[\/js]<\/p>\n<p><b>Note: Please make sure the service cluster IP range doesn\u2019t overlap with the infrastructure subnet IP range.<\/b><\/p>\n<p><b>Controller Manager Configuration (On Master)<\/b><\/p>\n<p>[js]<\/p>\n<p>vi \/etc\/kubernetes\/controller-manager<\/p>\n<p># Add your own!<br \/>\nKUBE_CONTROLLER_MANAGER_ARGS=&quot;--root-ca-file=\/srv\/kubernetes\/ca.crt --service-account-private-key-file=\/srv\/kubernetes\/server.key&quot;<\/p>\n<p>[\/js]<\/p>\n<p><b>Kubelet Configuration (On Minions)<\/b><\/p>\n<p><span style=\"font-weight: 400\">Kubelet is a node\/minion agent that runs pods and makes sure that they are healthy. It also communicates pod details to the Kubernetes Master. 
Kubelet configuration is stored in \/etc\/kubernetes\/kubelet.<\/span><\/p>\n<p><span style=\"font-weight: 400\">[On Minion1]<\/span><\/p>\n<p>[js]<br \/>\nvi \/etc\/kubernetes\/kubelet<\/p>\n<p># kubelet bind IP address (provide the private IP of the minion)<br \/>\nKUBELET_ADDRESS=&quot;--address=0.0.0.0&quot;<br \/>\n# Port on which kubelet listens<br \/>\nKUBELET_PORT=&quot;--port=10250&quot;<br \/>\n# Leave this blank to use the hostname of the server<br \/>\nKUBELET_HOSTNAME=&quot;--hostname-override=172.16.0.2&quot;<br \/>\n# Location of the api-server<br \/>\nKUBELET_API_SERVER=&quot;--api-servers=http:\/\/172.16.0.1:8080&quot;<br \/>\n# Add your own!<br \/>\nKUBELET_ARGS=&quot;&quot;<\/p>\n<p>[\/js]<\/p>\n<p><span style=\"font-weight: 400\">[On Minion2]<\/span><\/p>\n<p>[js]<br \/>\nvi \/etc\/kubernetes\/kubelet<\/p>\n<p># kubelet bind IP address (provide the private IP of the minion)<br \/>\nKUBELET_ADDRESS=&quot;--address=0.0.0.0&quot;<br \/>\n# Port on which kubelet listens<br \/>\nKUBELET_PORT=&quot;--port=10250&quot;<br \/>\n# Leave this blank to use the hostname of the server<br \/>\nKUBELET_HOSTNAME=&quot;--hostname-override=172.16.0.3&quot;<br \/>\n# Location of the api-server<br \/>\nKUBELET_API_SERVER=&quot;--api-servers=http:\/\/172.16.0.1:8080&quot;<br \/>\n# Add your own!<br \/>\nKUBELET_ARGS=&quot;&quot;<\/p>\n<p>[\/js]<\/p>\n<p><span style=\"font-weight: 400\">Before configuring Flannel for the Kubernetes cluster, we need to create the network configuration for Flannel in etcd. 
<\/span><\/p>\n<p><span style=\"font-weight: 400\">So, start etcd on the master using the following command:<\/span><\/p>\n<p>[js]<br \/>\nsystemctl start etcd<br \/>\n[\/js]<\/p>\n<p><span style=\"font-weight: 400\">Create a new directory in etcd to store the Flannel configuration using the following command:<\/span><\/p>\n<p>[js]<br \/>\netcdctl mkdir \/kube-centos\/network<br \/>\n[\/js]<\/p>\n<p><span style=\"font-weight: 400\">Next, we need to define the network configuration for Flannel:<\/span><\/p>\n<p>[js]<\/p>\n<p>etcdctl mk \/kube-centos\/network\/config &quot;{ \\&quot;Network\\&quot;: \\&quot;172.30.0.0\/16\\&quot;, \\&quot;SubnetLen\\&quot;: 24, \\&quot;Backend\\&quot;: { \\&quot;Type\\&quot;: \\&quot;vxlan\\&quot; } }&quot;<\/p>\n<p>[\/js]<\/p>\n<p><span style=\"font-weight: 400\">The above command allocates the 172.30.0.0\/16 subnet to the Flannel network. A \/24 Flannel subnet is allocated to each server in the Kubernetes cluster.<\/span><\/p>\n<p><b>Note: Please make sure the Flannel subnet doesn\u2019t overlap with the infrastructure subnet or the service cluster IP range.<\/b><\/p>\n<p><b>Flannel Configuration (On All Nodes)<\/b><\/p>\n<p><span style=\"font-weight: 400\">Kubernetes uses Flannel to build an overlay network for inter-pod communication. Flannel configuration is stored in \/etc\/sysconfig\/flanneld.<\/span><\/p>\n<p>[js]<\/p>\n<p>vi \/etc\/sysconfig\/flanneld<\/p>\n<p># etcd URL location. Point this to the server where etcd runs<br \/>\nFLANNEL_ETCD=&quot;http:\/\/172.16.0.1:2379&quot;<br \/>\n# etcd config key. 
This is the configuration key that flannel queries<br \/>\n# for address range assignment<br \/>\nFLANNEL_ETCD_PREFIX=&quot;\/kube-centos\/network&quot;<br \/>\n# Any additional options that you want to pass<br \/>\nFLANNEL_OPTIONS=&quot;&quot;<\/p>\n<p>[\/js]<\/p>\n<p><b>Step 4: Start services on Master and Minions<\/b><\/p>\n<p><strong>On Master<\/strong><\/p>\n<p>[js]<\/p>\n<p>systemctl enable kube-apiserver<br \/>\nsystemctl start kube-apiserver<br \/>\nsystemctl enable kube-controller-manager<br \/>\nsystemctl start kube-controller-manager<br \/>\nsystemctl enable kube-scheduler<br \/>\nsystemctl start kube-scheduler<br \/>\nsystemctl enable flanneld<br \/>\nsystemctl start flanneld<\/p>\n<p>[\/js]<\/p>\n<p><strong>On Minions<\/strong><\/p>\n<p>[js]<\/p>\n<p>systemctl enable kube-proxy<br \/>\nsystemctl start kube-proxy<br \/>\nsystemctl enable kubelet<br \/>\nsystemctl start kubelet<br \/>\nsystemctl enable flanneld<br \/>\nsystemctl start flanneld<br \/>\nsystemctl enable docker<br \/>\nsystemctl start docker<\/p>\n<p>[\/js]<\/p>\n<p><strong>Note: <\/strong>On each host, make sure that the IP address allocated to docker0 is the first IP address in the Flannel subnet; otherwise your cluster won&#8217;t work properly. 
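<\/p>\n<p><span style=\"font-weight: 400\">One way to verify this: flanneld records the subnet it leased for the host in \/run\/flannel\/subnet.env, which you can compare against the docker0 address (a quick sketch; the ip command comes from the iproute package):<\/span><\/p>

```shell
# flanneld writes the per-host lease to this env file (FLANNEL_SUBNET etc.)
cat /run/flannel/subnet.env

# docker0 should carry the first address of that leased subnet
ip addr show docker0 | grep "inet "
```

<p>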
To check this, use the &#8220;ifconfig&#8221; command.<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-45047\" src=\"\/blog\/wp-ttn-blog\/uploads\/2017\/01\/blog_random.png\" alt=\"blog_random\" width=\"626\" height=\"274\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2017\/01\/blog_random.png 626w, \/blog\/wp-ttn-blog\/uploads\/2017\/01\/blog_random-300x131.png 300w, \/blog\/wp-ttn-blog\/uploads\/2017\/01\/blog_random-624x273.png 624w\" sizes=\"(max-width: 626px) 100vw, 626px\" \/><\/p>\n<p><b>Step 5: Check the Status of All Services<\/b><\/p>\n<p><span style=\"font-weight: 400\">Make sure etcd, kube-apiserver, kube-controller-manager, kube-scheduler, and flanneld are running on the Master, and kube-proxy, kubelet, flanneld, and docker are running on the Minions.<\/span><\/p>\n<p><b>Deploying Add-ons in Kubernetes<\/b><\/p>\n<p><b>Configuring DNS for the Kubernetes Cluster<\/b><\/p>\n<p><span style=\"font-weight: 400\">To enable service name discovery, we need to configure DNS for our Kubernetes cluster. To do so, we need to deploy a DNS pod and service in our cluster and configure Kubelet to resolve all DNS queries from this DNS service (local DNS).<\/span><\/p>\n<p><span style=\"font-weight: 400\">You can download the DNS Replication Controller and Service YAML from my <a href=\"https:\/\/bitbucket.org\/Dhyaniarun\/kubernetes\/src\">repository<\/a>. 
You can also download the latest version of DNS from the official Kubernetes repository (kubernetes\/cluster\/addons\/dns).<\/span><\/p>\n<p><span style=\"font-weight: 400\">Next, use the following commands to create the replication controller and service:<\/span><\/p>\n<p>[js]<\/p>\n<p>kubectl create -f DNS\/skydns-rc.yaml<br \/>\nkubectl create -f DNS\/skydns-svc.yaml<\/p>\n<p>[\/js]<\/p>\n<p><strong>Note: Make sure you have entered the correct cluster IP for the DNS service in skydns-svc.yaml.<\/strong><\/p>\n<p>For this blog, we have used 10.254.3.100 as the DNS service IP.<\/p>\n<p><span style=\"font-weight: 400\">Now, configure Kubelet on all Minions to resolve all DNS queries from our local DNS service.<\/span><\/p>\n<p>[js]<\/p>\n<p>vim \/etc\/kubernetes\/kubelet<\/p>\n<p># Add your own!<br \/>\nKUBELET_ARGS=&quot;--cluster-dns=10.254.3.100 --cluster-domain=cluster.local&quot;<\/p>\n<p>[\/js]<\/p>\n<p><span style=\"font-weight: 400\">Restart kubelet on all Minions to load the new kubelet configuration.<\/span><\/p>\n<p>[js]<br \/>\nsystemctl restart kubelet<br \/>\n[\/js]<\/p>\n<p><b>Configuring the Dashboard for the Kubernetes Cluster<\/b><\/p>\n<p><span style=\"font-weight: 400\">Kubernetes Dashboard provides a user interface through which we can manage Kubernetes work units. We can create, delete, or edit all work units from the Dashboard.<\/span><\/p>\n<p><span style=\"font-weight: 400\">The Kubernetes Dashboard is also deployed as a pod in the cluster. You can download the Dashboard Deployment and Service YAML from my <a href=\"https:\/\/bitbucket.org\/Dhyaniarun\/kubernetes\/src\">repository<\/a>. 
You can also download the latest version of the Dashboard from the official Kubernetes repository (kubernetes\/cluster\/addons\/dashboard).<\/span><\/p>\n<p><span style=\"font-weight: 400\">After downloading the YAML, run the following commands from the master:<\/span><\/p>\n<p>[js]<\/p>\n<p>kubectl create -f Dashboard\/dashboard-controller.yaml<br \/>\nkubectl create -f Dashboard\/dashboard-service.yaml<\/p>\n<p>[\/js]<\/p>\n<p><span style=\"font-weight: 400\">Now you can access the Kubernetes Dashboard in your browser.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Open <\/span><span style=\"font-weight: 400\">http:\/\/master_public_ip:8080\/ui<\/span><span style=\"font-weight: 400\"> in your browser.<\/span><\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-45057\" src=\"\/blog\/wp-ttn-blog\/uploads\/2017\/01\/kube-dash1.png\" alt=\"kube-dash\" width=\"1300\" height=\"653\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2017\/01\/kube-dash1.png 1300w, \/blog\/wp-ttn-blog\/uploads\/2017\/01\/kube-dash1-300x150.png 300w, \/blog\/wp-ttn-blog\/uploads\/2017\/01\/kube-dash1-1024x514.png 1024w, \/blog\/wp-ttn-blog\/uploads\/2017\/01\/kube-dash1-624x313.png 624w\" sizes=\"(max-width: 1300px) 100vw, 1300px\" \/><\/p>\n<p><b>Note: <\/b>Don\u2019t forget to secure your Dashboard. You can install an Nginx or Apache web server on your master that proxies to localhost:8080 and enable http_auth on it to secure your Dashboard.<\/p>\n<p><b>Configuring Monitoring for the Kubernetes Cluster<\/b><\/p>\n<p><span style=\"font-weight: 400\">Kubernetes provides detailed resource usage monitoring at the container, pod, and cluster levels. Users can monitor their applications at all these levels, gaining deep insights that make it easy to find bottlenecks. To enable monitoring, we need to configure the monitoring stack for Kubernetes. Heapster lies at the heart of the monitoring stack. 
Heapster runs as a pod in the cluster. It discovers all the nodes in the cluster and queries usage information from the kubelet on each node. This usage information is stored in InfluxDB and visualized using Grafana.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Kubernetes provides ready-made YAML configs for the monitoring stack, but they aren\u2019t meant to be used directly, so we need to make some changes. Download the latest version of the monitoring stack from the official Kubernetes repository (kubernetes\/cluster\/addons\/cluster-monitoring\/influxdb). We only need to update heapster-controller.yaml: remove the template from the top of the file and replace the variables with their values in the body.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Updated YAML configs for the monitoring stack can also be <a href=\"https:\/\/bitbucket.org\/Dhyaniarun\/kubernetes\/src\">found here<\/a>.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Use the following command to launch the monitoring stack:<\/span><\/p>\n<p>[js]<\/p>\n<p>kubectl create -f cluster-monitoring\/influxdb<\/p>\n<p>[\/js]<\/p>\n<p><b>Check the Cluster Configuration<\/b><\/p>\n<p><span style=\"font-weight: 400\">Next, we need to check if all the add-ons are working properly.<\/span><\/p>\n<p>Run the following command to check if all the add-on services are running: kubectl cluster-info<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"alignnone size-full wp-image-45004\" src=\"\/blog\/wp-ttn-blog\/uploads\/2017\/01\/kube1.png\" alt=\"kube1\" width=\"994\" height=\"115\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2017\/01\/kube1.png 994w, \/blog\/wp-ttn-blog\/uploads\/2017\/01\/kube1-300x34.png 300w, \/blog\/wp-ttn-blog\/uploads\/2017\/01\/kube1-624x72.png 624w\" sizes=\"(max-width: 994px) 100vw, 994px\" \/><\/p>\n<p>This command shows whether the add-on services are working properly.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>&nbsp; In this 
blog, we will learn how to set up a Kubernetes cluster on servers running CentOS (bare-metal installation), as well as deploy add-on services such as DNS and the Kubernetes Dashboard. If you are new to Kubernetes and want to understand its architecture, you can go through the blog on the Introduction to Kubernetes. So, [&hellip;]<\/p>\n","protected":false},"author":918,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":26},"categories":[2348,1],"tags":[4374,1892,1883,3965,3984,4376,4377,4378,3979,4375],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/45000"}],"collection":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/users\/918"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/comments?post=45000"}],"version-history":[{"count":0,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/45000\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/media?parent=45000"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/categories?post=45000"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/tags?post=45000"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}