{"id":46274,"date":"2017-02-22T16:59:10","date_gmt":"2017-02-22T11:29:10","guid":{"rendered":"http:\/\/www.tothenew.com\/blog\/?p=46274"},"modified":"2017-02-22T16:59:10","modified_gmt":"2017-02-22T11:29:10","slug":"how-to-setup-kubernetes-master-ha-on-centos","status":"publish","type":"post","link":"https:\/\/www.tothenew.com\/blog\/how-to-setup-kubernetes-master-ha-on-centos\/","title":{"rendered":"How to Setup Kubernetes Master HA on CentOS?"},"content":{"rendered":"<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-46291 \" src=\"\/blog\/wp-ttn-blog\/uploads\/2017\/02\/blogha2.png\" alt=\"blogha2\" width=\"519\" height=\"220\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2017\/02\/blogha2.png 844w, \/blog\/wp-ttn-blog\/uploads\/2017\/02\/blogha2-300x127.png 300w, \/blog\/wp-ttn-blog\/uploads\/2017\/02\/blogha2-624x264.png 624w\" sizes=\"(max-width: 519px) 100vw, 519px\" \/><\/p>\n<p>This blog describes how to set up a high-availability (HA) <a title=\"How to install Kubernetes on CentOS?\" href=\"http:\/\/www.tothenew.com\/blog\/how-to-install-kubernetes-on-centos\/\">Kubernetes cluster<\/a>.\u00a0This is an advanced topic, and setting up a truly reliable, highly available distributed system requires a few steps to be performed.
We will go into each of these steps in detail, but a summary will serve as a useful guide.<\/p>\n<p>Here&#8217;s what the system should look like when it&#8217;s finished:<\/p>\n<p><img decoding=\"async\" loading=\"lazy\" class=\"aligncenter wp-image-46290 size-full\" src=\"\/blog\/wp-ttn-blog\/uploads\/2017\/02\/blogha.png\" alt=\"blogha\" width=\"960\" height=\"720\" srcset=\"\/blog\/wp-ttn-blog\/uploads\/2017\/02\/blogha.png 960w, \/blog\/wp-ttn-blog\/uploads\/2017\/02\/blogha-300x225.png 300w, \/blog\/wp-ttn-blog\/uploads\/2017\/02\/blogha-624x468.png 624w\" sizes=\"(max-width: 960px) 100vw, 960px\" \/><\/p>\n<p><span style=\"font-weight: 400\"><strong>Prerequisites:<\/strong> <\/span>You should have an existing Kubernetes cluster on bare metal on CentOS. This means Master1 is already configured for the cluster, but we need to make some changes on the existing master to create the etcd cluster. Refer to this blog to learn <a title=\"How to install Kubernetes on CentOS?\" href=\"http:\/\/www.tothenew.com\/blog\/how-to-install-kubernetes-on-centos\/\">how to setup a Kubernetes cluster on CentOS<\/a>.<\/p>\n<p><strong>Assumptions:<\/strong> We make the following assumptions in this blog:<\/p>\n<p>[js]Master1: 172.16.0.1<\/p>\n<p>Master2: 172.16.0.2<\/p>\n<p>Master3: 172.16.0.3[\/js]<\/p>\n<p><strong>Steps to configure Kubernetes Master HA<\/strong><\/p>\n<p><b>1. Install Kubernetes on Master2 and Master3:\u00a0<\/b>Create a new repo file for Kubernetes as follows:<\/p>\n<p>[js]vi \/etc\/yum.repos.d\/virt7-docker-common-release.repo<\/p>\n<p>[virt7-docker-common-release]<\/p>\n<p>name=virt7-docker-common-release<br \/>\nbaseurl=http:\/\/cbs.centos.org\/repos\/virt7-docker-common-release\/x86_64\/os\/<br \/>\ngpgcheck=0[\/js]<\/p>\n<p>Now install Kubernetes:<\/p>\n<p>[js]yum -y install --enablerepo=virt7-docker-common-release kubernetes etcd flannel[\/js]<\/p>\n<p>The above command also installs Docker and cAdvisor.<\/p>\n<p><b>2.
Create the etcd cluster using the following etcd configuration:\u00a0<\/b>The etcd configuration file is located at \/etc\/etcd\/etcd.conf<\/p>\n<p><b>For Master1<\/b><\/p>\n<p>[js]# [member]<\/p>\n<p>ETCD_NAME=infra0<\/p>\n<p>ETCD_DATA_DIR=&quot;\/var\/lib\/etcd\/default.etcd&quot;<\/p>\n<p>ETCD_LISTEN_PEER_URLS=&quot;http:\/\/172.16.0.1:2380&quot;<\/p>\n<p>ETCD_LISTEN_CLIENT_URLS=&quot;http:\/\/172.16.0.1:2379,http:\/\/127.0.0.1:2379&quot;<\/p>\n<p>#<\/p>\n<p>#[cluster]<\/p>\n<p>ETCD_INITIAL_CLUSTER=&quot;infra0=http:\/\/172.16.0.1:2380,infra1=http:\/\/172.16.0.2:2380,infra2=http:\/\/172.16.0.3:2380&quot;<\/p>\n<p>ETCD_INITIAL_CLUSTER_STATE=&quot;new&quot;<\/p>\n<p>ETCD_INITIAL_CLUSTER_TOKEN=&quot;etcd-cluster&quot;<\/p>\n<p>ETCD_ADVERTISE_CLIENT_URLS=&quot;http:\/\/172.16.0.1:2379&quot;<\/p>\n<p>[\/js]<\/p>\n<p><b>For Master2<\/b><\/p>\n<p>[js]# [member]<\/p>\n<p>ETCD_NAME=infra1<\/p>\n<p>ETCD_DATA_DIR=&quot;\/var\/lib\/etcd\/default.etcd&quot;<\/p>\n<p>ETCD_LISTEN_PEER_URLS=&quot;http:\/\/172.16.0.2:2380&quot;<\/p>\n<p>ETCD_LISTEN_CLIENT_URLS=&quot;http:\/\/172.16.0.2:2379,http:\/\/127.0.0.1:2379&quot;<\/p>\n<p>#<\/p>\n<p>#[cluster]<\/p>\n<p>ETCD_INITIAL_CLUSTER=&quot;infra0=http:\/\/172.16.0.1:2380,infra1=http:\/\/172.16.0.2:2380,infra2=http:\/\/172.16.0.3:2380&quot;<\/p>\n<p>ETCD_INITIAL_CLUSTER_STATE=&quot;new&quot;<\/p>\n<p>ETCD_INITIAL_CLUSTER_TOKEN=&quot;etcd-cluster&quot;<\/p>\n<p>ETCD_ADVERTISE_CLIENT_URLS=&quot;http:\/\/172.16.0.2:2379&quot;<\/p>\n<p>[\/js]<\/p>\n<p><b>For Master3<\/b><\/p>\n<p>[js]#
[member]<\/p>\n<p>ETCD_NAME=infra2<\/p>\n<p>ETCD_DATA_DIR=&quot;\/var\/lib\/etcd\/default.etcd&quot;<\/p>\n<p>ETCD_LISTEN_PEER_URLS=&quot;http:\/\/172.16.0.3:2380&quot;<\/p>\n<p>ETCD_LISTEN_CLIENT_URLS=&quot;http:\/\/172.16.0.3:2379,http:\/\/127.0.0.1:2379&quot;<\/p>\n<p>#<\/p>\n<p>#[cluster]<\/p>\n<p>ETCD_INITIAL_CLUSTER=&quot;infra0=http:\/\/172.16.0.1:2380,infra1=http:\/\/172.16.0.2:2380,infra2=http:\/\/172.16.0.3:2380&quot;<\/p>\n<p>ETCD_INITIAL_CLUSTER_STATE=&quot;new&quot;<\/p>\n<p>ETCD_INITIAL_CLUSTER_TOKEN=&quot;etcd-cluster&quot;<\/p>\n<p>ETCD_ADVERTISE_CLIENT_URLS=&quot;http:\/\/172.16.0.3:2379&quot;<\/p>\n<p>[\/js]<\/p>\n<p>Restart etcd on all the master nodes using the following command:<\/p>\n<p>[js]systemctl restart etcd[\/js]<\/p>\n<p>Run the following command on any one of the etcd nodes to check whether the etcd cluster has formed properly:<\/p>\n<p>[js]etcdctl cluster-health[\/js]<\/p>\n<p><b>3. Configure the other Kubernetes master components:\u00a0<\/b>Let\u2019s start with the Kubernetes common config.<\/p>\n<p><b>On Master<\/b><\/p>\n<p>[js]vi \/etc\/kubernetes\/config<\/p>\n<p># Comma separated list of nodes running the etcd cluster<br \/>\nKUBE_ETCD_SERVERS=&quot;--etcd-servers=http:\/\/Master_Private_IP:2379&quot;<br \/>\n# Logging will be stored in system journal<br \/>\nKUBE_LOGTOSTDERR=&quot;--logtostderr=true&quot;<br \/>\n# Journal message level, 0 is debug<br \/>\nKUBE_LOG_LEVEL=&quot;--v=0&quot;<br \/>\n# Should this cluster be allowed to run privileged docker containers<br \/>\nKUBE_ALLOW_PRIV=&quot;--allow-privileged=false&quot;<br \/>\n# Api-server endpoint used in scheduler and controller-manager<br \/>\nKUBE_MASTER=&quot;--master=http:\/\/Master_Private_IP:8080&quot;[\/js]<\/p>\n<p><b>On Minion<\/b><\/p>\n<p>[js]vi \/etc\/kubernetes\/config<\/p>\n<p># Comma separated list of nodes running the etcd cluster<br \/>\nKUBE_ETCD_SERVERS=&quot;--etcd-servers=http:\/\/k8_Master:2379&quot;<br \/>\n# Logging will be stored in system journal<br
\/>\nKUBE_LOGTOSTDERR=&quot;--logtostderr=true&quot;<br \/>\n# Journal message level, 0 is debug<br \/>\nKUBE_LOG_LEVEL=&quot;--v=0&quot;<br \/>\n# Should this cluster be allowed to run privileged docker containers<br \/>\nKUBE_ALLOW_PRIV=&quot;--allow-privileged=false&quot;<br \/>\n# Api-server endpoint used in scheduler and controller-manager<br \/>\nKUBE_MASTER=&quot;--master=http:\/\/k8_Master:8080&quot;<\/p>\n<p>[\/js]<\/p>\n<p><b>On Master: <\/b><span style=\"font-weight: 400\">The API server configuration needs to be updated on the master.<\/span><\/p>\n<p><span style=\"font-weight: 400\">Copy all the certificates from the existing master to the other two masters (with the same file permissions). All the certificates are stored at <\/span><b>\/srv\/kubernetes\/<\/b><\/p>\n<p><span style=\"font-weight: 400\">Now, configure the API server as follows:<\/span><\/p>\n<p>[js]vi \/etc\/kubernetes\/apiserver<\/p>\n<p># Bind kube api server to this IP<br \/>\nKUBE_API_ADDRESS=&quot;--address=0.0.0.0&quot;<br \/>\n# Port that kube api server listens on<br \/>\nKUBE_API_PORT=&quot;--port=8080&quot;<br \/>\n# Port kubelet listens on<br \/>\nKUBELET_PORT=&quot;--kubelet-port=10250&quot;<br \/>\n# Address range to use for services (the work unit of Kubernetes)<br \/>\nKUBE_SERVICE_ADDRESSES=&quot;--service-cluster-ip-range=10.254.0.0\/16&quot;<br \/>\n# Add your own!<br \/>\nKUBE_API_ARGS=&quot; --client-ca-file=\/srv\/kubernetes\/ca.crt --tls-cert-file=\/srv\/kubernetes\/server.cert --tls-private-key-file=\/srv\/kubernetes\/server.key&quot;<\/p>\n<p>[\/js]<\/p>\n<p><b>On Master:\u00a0<\/b>Configure the Kubernetes Controller Manager:<\/p>\n<p>[js]vi \/etc\/kubernetes\/controller-manager<\/p>\n<p>KUBE_CONTROLLER_MANAGER_ARGS=&quot;--root-ca-file=\/srv\/kubernetes\/ca.crt
--service-account-private-key-file=\/srv\/kubernetes\/server.key&quot;<\/p>\n<p>[\/js]<\/p>\n<p><b>On Master and Minion:\u00a0<\/b>Configure Flanneld:<\/p>\n<p>[js]vi \/etc\/sysconfig\/flanneld<\/p>\n<p># etcd url location. Point this to the server where etcd runs<br \/>\nFLANNEL_ETCD=&quot;http:\/\/k8_Master:2379&quot;<br \/>\n# etcd config key. This is the configuration key that flannel queries<br \/>\n# for address range assignment<br \/>\nFLANNEL_ETCD_PREFIX=&quot;\/kube-centos\/network&quot;<br \/>\n# Any additional options that you want to pass<br \/>\nFLANNEL_OPTIONS=&quot;&quot;<\/p>\n<p>[\/js]<\/p>\n<p>Only one master must be active at any given time so that the cluster remains in a consistent state. For this, we need to configure the Kubernetes Controller Manager and Scheduler to start with the <strong>--leader-elect<\/strong> option.<\/p>\n<p><span style=\"font-weight: 400\">Update their configuration files as follows:<\/span><\/p>\n<p>[js]vi \/etc\/kubernetes\/controller-manager<\/p>\n<p>KUBE_CONTROLLER_MANAGER_ARGS=&quot;--root-ca-file=\/srv\/kubernetes\/ca.crt --service-account-private-key-file=\/srv\/kubernetes\/server.key --leader-elect&quot;<\/p>\n<p>vi \/etc\/kubernetes\/scheduler<\/p>\n<p>KUBE_SCHEDULER_ARGS=&quot;--leader-elect&quot;<\/p>\n<p>[\/js]<\/p>\n<p><strong>4.\u00a0<\/strong><b>Create an internal load balancer that forwards to all three masters, as follows:<\/b><\/p>\n<p>[js]<br \/>\n ______ Master1 Port 8080<br \/>\n |<br \/>\nLoad Balancer Port 8080 ------- Master2 Port 8080<br \/>\n |<br \/>\n |_____ Master3 Port 8080<\/p>\n<p> ______ Master1 Port 2379<br \/>\n |<br \/>\nLoad Balancer Port 2379 ------- Master2 Port 2379<br \/>\n |<br \/>\n |_____ Master3 Port 2379<br \/>\n[\/js]<\/p>\n<p><strong>5.\u00a0<\/strong>Now <span style=\"font-weight: 400\">replace the Master IP in <strong>\/etc\/hosts<\/strong> of all
<\/span><b>Minions<\/b><span style=\"font-weight: 400\"> with the IP address of the Load Balancer, and restart all Kubernetes services on the minions.<\/span><\/p>\n<p>At this point, we are done with the master components. If we have an existing cluster, this is as simple as reconfiguring our kubelets to talk to the load-balanced endpoint and restarting the kubelets on each node. If we are turning up a fresh cluster, we will need to install the kubelet and kube-proxy on each worker node and set the <strong>--apiserver<\/strong> flag to our replicated endpoint.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>This blog describes how to set up a high-availability (HA) Kubernetes cluster.\u00a0This is an advanced topic, and setting up a truly reliable, highly available distributed system requires a few steps to be performed. We will go into each of these steps in detail, but a summary will serve as a useful guide. Here&#8217;s what the [&hellip;]<\/p>\n","protected":false},"author":968,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"iawp_total_views":27},"categories":[4308,2348,1],"tags":[4374,1892,1883,3965,3984,4437,3979,4375],"aioseo_notices":[],"_links":{"self":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/46274"}],"collection":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/users\/968"}],"replies":[{"embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/comments?post=46274"}],"version-history":[{"count":0,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/posts\/46274\/revisions"}],"wp:attachment":[{"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/media?parent=46274"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/categories?post=46274"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.tothenew.com\/blog\/wp-json\/wp\/v2\/tags?post=46274"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}