Setup Kubernetes Cluster on AWS EC2

13 / Jan / 2016 by Neeraj Gupta


What is Kubernetes / K8s?

The name Kubernetes originates from Greek, meaning “helmsman” or “pilot”, and is the root of “governor” and “cybernetic”. Kubernetes is an open-source project started by Google in 2014. It automates the deployment, scaling, and operation of application containers across clusters of hosts.

In this blog, we will go through the steps to set up a Kubernetes cluster on AWS with 1 master and 2 minion nodes. After going through this blog, you will be able to set up a working Kubernetes cluster on AWS for development and testing environments.

[Image: Kubernetes cluster on AWS]

Setup Kubernetes Cluster on AWS EC2:

  1. You can either set up the AWS CLI on your local machine or launch a new EC2 instance with an IAM role that has administrator access. In this tutorial, I will be using AWS EC2 instances for setting up the Kubernetes cluster.
    • Create a new role with Administrator Access. Note: The workstation requires administrator access to create the new IAM roles (i.e. kubernetes-master and kubernetes-minion) that are assigned to the newly created master and minion nodes.
    • Launch a t2.micro instance with the Amazon Linux AMI and the IAM role we created in the previous step. I am using the Amazon Linux AMI because it has the awscli tools installed by default. (A quick check that the role is attached appears at the end of this step.)
    • Download the Kubernetes setup files and extract them as shown below:
    wget https://storage.googleapis.com/kubernetes-release/release/v1.1.3/kubernetes.tar.gz
    tar -xzvf kubernetes.tar.gz
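    • Before moving on, you can confirm that the workstation is actually picking up credentials from the attached IAM role. A minimal sanity check (both are standard awscli subcommands):
    # Verify the instance-profile credentials the workstation will use
    aws sts get-caller-identity
    # Prints nothing if no default region is configured
    aws configure get region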
  2. To spin up the Kubernetes cluster on AWS, we will run the kube-up.sh script. It uses the kubernetes/cluster/aws/util.sh file for setting up AWS, and it reads the values of the variables defined in kubernetes/cluster/aws/config-default.sh. You can modify the below-mentioned properties in the config-default.sh file (an environment-variable alternative is sketched after the listing):
    • The number of master and minion nodes, their instance types, the tags associated with them, the S3 bucket and region used to store setup files, and the availability zone and region to be used.
    • The cluster IP range, master IP range, and DNS IP, and flags to enable or disable logging, Elasticsearch, monitoring, the web UI, and Kibana:
      ZONE=${KUBE_AWS_ZONE:-us-east-1c}
      MASTER_SIZE=${MASTER_SIZE:-t2.small}
      MINION_SIZE=${MINION_SIZE:-t2.small}
      NUM_MINIONS=${NUM_MINIONS:-2}
      AWS_S3_BUCKET=neerajg.in
      AWS_S3_REGION=${AWS_S3_REGION:-us-east-1}
      DOCKER_STORAGE=${DOCKER_STORAGE:-aufs}
      INSTANCE_PREFIX="${KUBE_AWS_INSTANCE_PREFIX:-kubernetes}"
      CLUSTER_ID=${INSTANCE_PREFIX}
      AWS_SSH_KEY=${AWS_SSH_KEY:-$HOME/.ssh/kube_aws_rsa}
      IAM_PROFILE_MASTER="kubernetes-master"
      IAM_PROFILE_MINION="kubernetes-minion"
      LOG="/dev/null"
      MASTER_DISK_TYPE="${MASTER_DISK_TYPE:-gp2}"
      MASTER_DISK_SIZE=${MASTER_DISK_SIZE:-20}
      MASTER_ROOT_DISK_TYPE="${MASTER_ROOT_DISK_TYPE:-gp2}"
      MASTER_ROOT_DISK_SIZE=${MASTER_ROOT_DISK_SIZE:-8}
      MINION_ROOT_DISK_TYPE="${MINION_ROOT_DISK_TYPE:-gp2}"
      MINION_ROOT_DISK_SIZE=${MINION_ROOT_DISK_SIZE:-20}
      MASTER_NAME="${INSTANCE_PREFIX}-master"
      MASTER_TAG="${INSTANCE_PREFIX}-master"
      MINION_TAG="${INSTANCE_PREFIX}-minion"
      MINION_SCOPES=""
      POLL_SLEEP_INTERVAL=3
      SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16"  # formerly PORTAL_NET
      CLUSTER_IP_RANGE="${CLUSTER_IP_RANGE:-10.244.0.0/16}"
      MASTER_IP_RANGE="${MASTER_IP_RANGE:-10.246.0.0/24}"
      MASTER_RESERVED_IP="${MASTER_RESERVED_IP:-auto}"
      ENABLE_CLUSTER_MONITORING="${KUBE_ENABLE_CLUSTER_MONITORING:-influxdb}"
      ENABLE_NODE_LOGGING="${KUBE_ENABLE_NODE_LOGGING:-true}"
      LOGGING_DESTINATION="${KUBE_LOGGING_DESTINATION:-elasticsearch}" # options: elasticsearch, gcp
      ENABLE_CLUSTER_LOGGING="${KUBE_ENABLE_CLUSTER_LOGGING:-true}"
      ELASTICSEARCH_LOGGING_REPLICAS=1
      if [[ ${KUBE_ENABLE_INSECURE_REGISTRY:-false} == "true" ]]; then
        EXTRA_DOCKER_OPTS="--insecure-registry 10.0.0.0/8"
      fi
      ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
      DNS_SERVER_IP="10.0.0.10"
      DNS_DOMAIN="cluster.local"
      DNS_REPLICAS=1
      ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
      ADMISSION_CONTROL=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
      ENABLE_MINION_PUBLIC_IP=${KUBE_ENABLE_MINION_PUBLIC_IP:-true}
      KUBE_OS_DISTRIBUTION="${KUBE_OS_DISTRIBUTION:-vivid}"
      KUBE_MINION_IMAGE="${KUBE_MINION_IMAGE:-}"
      COREOS_CHANNEL="${COREOS_CHANNEL:-alpha}"
      CONTAINER_RUNTIME="${KUBE_CONTAINER_RUNTIME:-docker}"
      RKT_VERSION="${KUBE_RKT_VERSION:-0.5.5}"
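    • Because most of these settings fall back to environment variables (the ${VAR:-default} pattern above), you can also override them at launch time instead of editing the file. Note that AWS_S3_BUCKET is hard-coded in this listing, so it still has to be changed in the file itself. A minimal sketch using values from the listing above:
      export KUBE_AWS_ZONE=us-east-1c   # consumed as ZONE
      export MASTER_SIZE=t2.small
      export MINION_SIZE=t2.small
      export NUM_MINIONS=2
      export AWS_S3_REGION=us-east-1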
  3. The main script that does all the magic is kubernetes/cluster/aws/util.sh. It sets up the VPC, subnet, IGW, route tables, security groups, and SSH keys; creates and attaches the IAM profiles; launches the master instance with user-data; creates a launch configuration with user-data for the minions; creates the auto scaling group; and allocates and assigns an EIP.
      • The screenshot below shows the launch configuration and auto scaling group commands used in the util.sh file. These commands use the variables we defined in the config-default.sh file earlier (an illustrative AWS CLI sketch follows the screenshot).

    [Image: launch configuration and auto scaling group commands in util.sh]
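      • Since the screenshot only hints at the calls involved, here is an illustrative sketch of those two commands using the AWS CLI; the exact flags and variable names in util.sh may differ:

    # Launch configuration for the minions (names here are illustrative)
    aws autoscaling create-launch-configuration \
      --launch-configuration-name kubernetes-minion-group \
      --image-id $KUBE_MINION_IMAGE \
      --instance-type $MINION_SIZE \
      --iam-instance-profile $IAM_PROFILE_MINION \
      --key-name kubernetes \
      --user-data file://minion-user-data

    # Auto scaling group that keeps NUM_MINIONS minions running
    aws autoscaling create-auto-scaling-group \
      --auto-scaling-group-name kubernetes-minion-group \
      --launch-configuration-name kubernetes-minion-group \
      --min-size $NUM_MINIONS --max-size $NUM_MINIONS \
      --vpc-zone-identifier $SUBNET_ID \
      --tags Key=Name,Value=$MINION_TAG,PropagateAtLaunch=true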

  4. Spinning up Kubernetes cluster
      • To spin up a Kubernetes cluster on AWS, first export the Kubernetes provider and then execute the kube-up.sh script as shown below:
    $ export KUBERNETES_PROVIDER=aws
    $ bash cluster/kube-up.sh
      • This script will create a VPC with network CIDR 172.20.0.0/16, an IGW, a subnet, a routing table, security groups, tags, a launch configuration for the minions, an auto scaling group for the minions, SSH keys, and the Kubernetes master. The script will take a few minutes to complete, and at the end it will display output as shown below:

    [Image: final output of the kube-up.sh run]

      • If you want to manage the Kubernetes cluster from your EC2 workstation, you need to set up kubectl on it: add the kubectl binaries to your PATH and try a few commands, as shown below. (A short service example follows the screenshot.)
      • Alternatively, you can find the SSH key for the master and minions (/root/.ssh/kube_aws_rsa) on your workstation and connect to them directly over their public IPs using that private key.
    $ export PATH=/home/ec2-user/kubernetes/platforms/linux/amd64:$PATH
    $ kubectl get nodes
    $ kubectl get pods --namespace=kube-system
    $ kubectl run ttnd-nginx --image=nginx
    $ kubectl get pods

    [Image: kubectl CLI examples]
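      • As a quick follow-up to the kubectl run example above, you can verify that the nginx pod is actually serving by exposing it as a service. A sketch, assuming the ttnd-nginx replication controller created above is running (in this release, kubectl run creates a replication controller); <cluster-ip> is the value reported by kubectl get svc:
    $ kubectl expose rc ttnd-nginx --port=80 --name=ttnd-nginx-svc
    $ kubectl get svc ttnd-nginx-svc
    $ curl http://<cluster-ip>   # from the master or a minion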

  5. Checking AWS console
      • I started the Kubernetes cluster with 1 master and 2 minions; we can see all of them running in the AWS EC2 console below:

    [Image: Kubernetes instances in the AWS EC2 console]

      • When we set up the Kubernetes cluster using the kube-up.sh script, it automatically adds routes to the default route table. The master uses its own IP range, and the minions use a separate cluster IP range for service discovery and pods, with each minion getting a /24 slice of it. The IP addressing used in this cluster setup is shown below:
        Master IP Range: 10.246.0.0/24
        Cluster IP Range: 10.244.0.0/16
        Minion 1: 10.244.0.0/24
        Minion 2: 10.244.1.0/24
        [Image: AWS route table for the Kubernetes cluster]

        Note: If we add another minion to the cluster, it will get the 10.244.2.0/24 cluster IP range, and its route will be automatically added to the default route table.
      • Any request from the master to Minion 1’s cluster IP range (10.244.0.0/24) is routed to Minion 1’s ENI, and any request from any minion (Minion 1: 10.244.0.0/24, Minion 2: 10.244.1.0/24) to the master IP range (10.246.0.0/24) is routed to the master’s ENI. (You can verify these routes with the AWS CLI sketch at the end of this step.)
      • Lastly, we can access the Kubernetes web UI at https://master-public-ip/ui. It will ask for a username and password, which can be found in the /root/.kube/config and /srv/kubernetes/basic_auth.csv files.

    [Image: Kubernetes web UI]
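      • To confirm the routing described above, you can list the routes kube-up.sh added using the AWS CLI. A sketch; replace <vpc-id> with the ID of the VPC the script created:
    $ aws ec2 describe-route-tables \
        --filters Name=vpc-id,Values=<vpc-id> \
        --query 'RouteTables[].Routes[].[DestinationCidrBlock,NetworkInterfaceId]' \
        --output table

      The 10.244.x.0/24 entries should point at the minion ENIs, and 10.246.0.0/24 at the master ENI, matching the ranges listed above.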

In the next blog post, I will show you how to set up a Highly Available (HA) Kubernetes cluster on AWS.
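
Until then, if you are done experimenting with this cluster, the same release ships a matching teardown script that deletes the AWS resources kube-up.sh created:

    $ export KUBERNETES_PROVIDER=aws
    $ bash cluster/kube-down.sh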
