Set Up a Kubernetes Cluster on AWS EC2

13 Jan 2016 by Neeraj Gupta


What is Kubernetes / K8s?

The name Kubernetes originates from Greek, meaning “helmsman” or “pilot”, and is the root of “governor” and “cybernetic”. Kubernetes is an open-source project started by Google in 2014 that automates the deployment, scaling, and operation of application containers across clusters of hosts.

In this blog post, we will go through the steps to set up a Kubernetes cluster on AWS with 1 master and 2 minion nodes. After reading it, you will be able to set up a working Kubernetes cluster on AWS for development and testing environments.

[Image: Kubernetes on AWS]

Setting Up the Kubernetes Cluster on AWS EC2:

  1. You can either set up the AWS CLI on your local machine or launch a new EC2 instance with an IAM role that has administrator access. In this tutorial I will be using AWS EC2 instances for setting up the Kubernetes cluster.
    • Create a new role with Administrator Access. Note: the workstation requires administrator access because it creates the new IAM roles (i.e. kubernetes-master and kubernetes-minion) that are assigned to the newly created master and minion nodes (a CLI sketch of this follows the download commands below).
    • Launch a t2.micro instance with the Amazon Linux AMI and the IAM role we created in the previous step. I am using the Amazon Linux AMI because it has the awscli tools installed by default.
    • Download the Kubernetes setup files and extract them as shown below:

    [js]wget https://storage.googleapis.com/kubernetes-release/release/v1.1.3/kubernetes.tar.gz
    tar -xzvf kubernetes.tar.gz[/js]
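
    For reference, here is a minimal sketch of how the kubernetes-master instance profile could be created with the AWS CLI (the kubernetes-minion profile is analogous). This is only an illustration: the trust.json file name is my own, kube-up.sh creates these profiles automatically, and I attach the broad AdministratorAccess managed policy purely for brevity.

    [js]# trust.json (hypothetical file) lets EC2 assume the role:
    # {"Version":"2012-10-17","Statement":[{"Effect":"Allow",
    #   "Principal":{"Service":"ec2.amazonaws.com"},"Action":"sts:AssumeRole"}]}
    aws iam create-role --role-name kubernetes-master \
      --assume-role-policy-document file://trust.json
    aws iam attach-role-policy --role-name kubernetes-master \
      --policy-arn arn:aws:iam::aws:policy/AdministratorAccess
    aws iam create-instance-profile --instance-profile-name kubernetes-master
    aws iam add-role-to-instance-profile \
      --instance-profile-name kubernetes-master --role-name kubernetes-master[/js]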

  2. To spin up the Kubernetes cluster on AWS, we will run the kube-up.sh script. It uses the kubernetes/cluster/aws/util.sh file for setting up AWS, and it reads the values of the variables defined in kubernetes/cluster/aws/config-default.sh. You can modify the below-mentioned properties in the config-default.sh file (or override most of them with environment variables; see the example after the listing):
    • The number of master and minion nodes, their instance types, the tags associated with them, the S3 bucket and region used to store setup files, and the availability zone and region to be used.
    • The cluster IP range, master IP range, and DNS IP, plus flags to enable or disable logging, Elasticsearch, monitoring, the web UI, and Kibana.

      [js]ZONE=${KUBE_AWS_ZONE:-us-east-1c}
      MASTER_SIZE=${MASTER_SIZE:-t2.small}
      MINION_SIZE=${MINION_SIZE:-t2.small}
      NUM_MINIONS=${NUM_MINIONS:-2}
      AWS_S3_BUCKET=neerajg.in
      AWS_S3_REGION=${AWS_S3_REGION:-us-east-1}
      DOCKER_STORAGE=${DOCKER_STORAGE:-aufs}
      INSTANCE_PREFIX="${KUBE_AWS_INSTANCE_PREFIX:-kubernetes}"
      CLUSTER_ID=${INSTANCE_PREFIX}
      AWS_SSH_KEY=${AWS_SSH_KEY:-$HOME/.ssh/kube_aws_rsa}
      IAM_PROFILE_MASTER="kubernetes-master"
      IAM_PROFILE_MINION="kubernetes-minion"
      LOG="/dev/null"
      MASTER_DISK_TYPE="${MASTER_DISK_TYPE:-gp2}"
      MASTER_DISK_SIZE=${MASTER_DISK_SIZE:-20}
      MASTER_ROOT_DISK_TYPE="${MASTER_ROOT_DISK_TYPE:-gp2}"
      MASTER_ROOT_DISK_SIZE=${MASTER_ROOT_DISK_SIZE:-8}
      MINION_ROOT_DISK_TYPE="${MINION_ROOT_DISK_TYPE:-gp2}"
      MINION_ROOT_DISK_SIZE=${MINION_ROOT_DISK_SIZE:-20}
      MASTER_NAME="${INSTANCE_PREFIX}-master"
      MASTER_TAG="${INSTANCE_PREFIX}-master"
      MINION_TAG="${INSTANCE_PREFIX}-minion"
      MINION_SCOPES=""
      POLL_SLEEP_INTERVAL=3
      SERVICE_CLUSTER_IP_RANGE="10.0.0.0/16" # formerly PORTAL_NET
      CLUSTER_IP_RANGE="${CLUSTER_IP_RANGE:-10.244.0.0/16}"
      MASTER_IP_RANGE="${MASTER_IP_RANGE:-10.246.0.0/24}"
      MASTER_RESERVED_IP="${MASTER_RESERVED_IP:-auto}"
      ENABLE_CLUSTER_MONITORING="${KUBE_ENABLE_CLUSTER_MONITORING:-influxdb}"
      ENABLE_NODE_LOGGING="${KUBE_ENABLE_NODE_LOGGING:-true}"
      LOGGING_DESTINATION="${KUBE_LOGGING_DESTINATION:-elasticsearch}" # options: elasticsearch, gcp
      ENABLE_CLUSTER_LOGGING="${KUBE_ENABLE_CLUSTER_LOGGING:-true}"
      ELASTICSEARCH_LOGGING_REPLICAS=1
      if [[ ${KUBE_ENABLE_INSECURE_REGISTRY:-false} == "true" ]]; then
        EXTRA_DOCKER_OPTS="--insecure-registry 10.0.0.0/8"
      fi
      ENABLE_CLUSTER_DNS="${KUBE_ENABLE_CLUSTER_DNS:-true}"
      DNS_SERVER_IP="10.0.0.10"
      DNS_DOMAIN="cluster.local"
      DNS_REPLICAS=1
      ENABLE_CLUSTER_UI="${KUBE_ENABLE_CLUSTER_UI:-true}"
      ADMISSION_CONTROL=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota
      ENABLE_MINION_PUBLIC_IP=${KUBE_ENABLE_MINION_PUBLIC_IP:-true}
      KUBE_OS_DISTRIBUTION="${KUBE_OS_DISTRIBUTION:-vivid}"
      KUBE_MINION_IMAGE="${KUBE_MINION_IMAGE:-}"
      COREOS_CHANNEL="${COREOS_CHANNEL:-alpha}"
      CONTAINER_RUNTIME="${KUBE_CONTAINER_RUNTIME:-docker}"
      RKT_VERSION="${KUBE_RKT_VERSION:-0.5.5}"[/js]
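
      Since most of these values use the ${VAR:-default} shell pattern, you can override them by exporting environment variables before running kube-up.sh instead of editing the file. A minimal example using variable names from the listing above (note that AWS_S3_BUCKET is hard-coded in my copy, so that one has to be edited in the file):

      [js]export KUBE_AWS_ZONE=us-east-1c
      export MASTER_SIZE=t2.small
      export MINION_SIZE=t2.small
      export NUM_MINIONS=2
      export AWS_S3_REGION=us-east-1[/js]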

  3. The main script that does all the magic is kubernetes/cluster/aws/util.sh. It sets up the VPC, subnet, IGW, route tables, security groups, and SSH keys; creates and attaches the IAM profiles; launches the master instance with user data; creates a launch configuration with user data for the minions; creates the auto scaling group; and allocates and assigns the EIP.
      • The screenshot below shows the launch configuration and auto scaling group commands used in the util.sh file. These commands use the variables we defined in the config-default.sh file earlier.

    [Screenshot: launch configuration and auto scaling group commands in util.sh]

  4. Spinning up the Kubernetes cluster
      • To spin up the Kubernetes cluster on AWS, first export the Kubernetes provider and execute the kube-up.sh script as shown below:

    [js]$ export KUBERNETES_PROVIDER=aws
    $ bash cluster/kube-up.sh[/js]
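
    If a run fails partway through (see the comments at the end of this post for examples), the partially created resources can be cleaned up and the cluster recreated using the companion kube-down.sh script:

    [js]$ export KUBERNETES_PROVIDER=aws
    $ bash cluster/kube-down.sh[/js]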

      • This script will create a VPC with network CIDR 172.20.0.0/16, an IGW, a subnet, a routing table, security groups, tags, a launch configuration for the minions, an auto scaling group for the minions, SSH keys, and the Kubernetes master. It will take a few minutes to complete, and at the end it will display output as shown below:

    [Screenshot: final output of kube-up.sh]
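
    Since kube-up.sh tags everything it creates with KubernetesCluster=kubernetes, you can also verify the resources from the workstation with the AWS CLI. A quick sketch, assuming your default region is the cluster's region:

    [js]aws ec2 describe-vpcs --filters "Name=tag:KubernetesCluster,Values=kubernetes"
    aws ec2 describe-instances --filters "Name=tag:KubernetesCluster,Values=kubernetes" \
      --query "Reservations[].Instances[].[InstanceId,State.Name]" --output table[/js]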

      • If you want to manage the Kubernetes cluster from your EC2 workstation, you need to set up kubectl on the workstation. To do so, add the kubectl binaries to your PATH as shown below:
      • Alternatively, you can find the master and minion SSH key (/root/.ssh/kube_aws_rsa) on your workstation and connect to the master or minions using their public IPs and that private key (see the SSH example after the kubectl commands below).

    [js]$ export PATH=/home/ec2-user/kubernetes/platforms/linux/amd64:$PATH
    $ kubectl get nodes
    $ kubectl get pods --namespace=kube-system
    $ kubectl run ttnd-nginx --image=nginx
    $ kubectl get pods[/js]
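
    And to connect to the master directly over SSH (a sketch; I am assuming the default login user for the Ubuntu images used here is ubuntu, and the master's public IP is shown in the kube-up.sh output):

    [js]$ ssh -i /root/.ssh/kube_aws_rsa ubuntu@<master-public-ip>[/js]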

    [Screenshot: kubectl CLI examples]

  5. Checking the AWS console
      • I started the Kubernetes cluster with 1 master and 2 minions; we can see all of them running in the AWS EC2 console below:

    [Screenshot: Kubernetes instances in the AWS EC2 console]

      • When we set up the Kubernetes cluster using the kube-up.sh script, it automatically adds routes to the default route table. The master nodes use one network range, and the minion nodes use another for service discovery and pods. The IP addressing used in this cluster setup is shown below:
        Master IP Range: 10.246.0.0/24
        Cluster IP Range: 10.244.0.0/16
        Minion 1: 10.244.0.0/24
        Minion 2: 10.244.1.0/24
        [Screenshot: route table entries in the AWS console]

        Note: If we add another minion to the cluster, it will get the 10.244.2.0/24 cluster IP range, and its route will be automatically added to the default route table.
      • Any request from the master to Minion 1's cluster IP range (10.244.0.0/24) is routed to Minion 1's ENI, and any request from any minion (i.e. Minion 1: 10.244.0.0/24, Minion 2: 10.244.1.0/24) to the master IP range (10.246.0.0/24) is routed to the master's ENI.
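
        For illustration, the route kube-up.sh adds for such a new minion is equivalent to a command like the following (the route table and instance IDs are placeholders):

        [js]aws ec2 create-route --route-table-id rtb-xxxxxxxx \
          --destination-cidr-block 10.244.2.0/24 --instance-id i-xxxxxxxxxxxxxxxxx[/js]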
      • Lastly, we can access the Kubernetes web UI at https://master-public-ip/ui. It will ask for a username and password, which can be found in the /root/.kube/config and /srv/kubernetes/basic_auth.csv files.
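
        For example, on the workstation the credentials can be pulled from the kubectl config with a quick grep (a sketch; adjust the path if your kubeconfig lives elsewhere):

        [js]$ sudo grep -E "username|password" /root/.kube/config[/js]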

    [Screenshot: Kubernetes web UI]

In the next blog post, I will show you how to set up a Highly Available (HA) Kubernetes cluster on AWS.


Comments (12)

  1. Jayaram J S

    I am getting the below error when I am trying to run kube-up.sh

    Attempt 1 to check for SSH to master [ssh to master working]
    Attempt 1 to check for salt-master [salt-master not working yet]
    Attempt 2 to check for salt-master [salt-master not working yet]
    Attempt 3 to check for salt-master [salt-master not working yet]
    Attempt 4 to check for salt-master [salt-master not working yet]
    Attempt 5 to check for salt-master [salt-master not working yet]

    Any idea how to fix?

    Reply
  2. Ashish Karpe

    ubuntu@ip-172-31-22-20:~/kube/aws/kubernetes/cluster$ ./kube-up.sh
    … Starting cluster using provider: aws
    … calling verify-prereqs
    … calling kube-up
    Starting cluster using os distro: vivid
    Uploading to Amazon S3
    +++ Staging server tars to S3 Storage: kubernetes-staging-c920e9ee3b17fb263b849faee9724aa0/devel
    INSTANCEPROFILE arn:aws:iam::688241667475:instance-profile/kubernetes-master 2018-04-26T12:09:02Z AIPAI2JA3UN4FPYN6OY5Q kubernetes-master /
    ROLES arn:aws:iam::688241667475:role/kubernetes-master 2018-04-26T12:09:00Z / AROAIEJIPUSOP3WO6YJOU kubernetes-master
    ASSUMEROLEPOLICYDOCUMENT 2012-10-17
    STATEMENT sts:AssumeRole Allow
    PRINCIPAL ec2.amazonaws.com
    INSTANCEPROFILE arn:aws:iam::688241667475:instance-profile/kubernetes-minion 2018-04-26T12:09:05Z AIPAJD7CF5XMILHBXNYTA kubernetes-minion /
    ROLES arn:aws:iam::688241667475:role/kubernetes-minion 2018-04-26T12:09:04Z / AROAID4B4I22ODKCFOQM2 kubernetes-minion
    ASSUMEROLEPOLICYDOCUMENT 2012-10-17
    STATEMENT sts:AssumeRole Allow
    PRINCIPAL ec2.amazonaws.com
    Using SSH key with (AWS) fingerprint: 58:19:32:a4:ca:22:fb:54:27:69:ca:23:6b:21:bd:5c
    Creating vpc.
    Adding tag to vpc-626a4b1b: Name=kubernetes-vpc
    Adding tag to vpc-626a4b1b: KubernetesCluster=kubernetes
    Using VPC vpc-626a4b1b
    Creating subnet.
    Adding tag to subnet-6c9d3027: KubernetesCluster=kubernetes
    Using subnet subnet-6c9d3027
    Creating Internet Gateway.
    Using Internet Gateway igw-825828e4
    Associating route table.
    Creating route table
    Adding tag to rtb-6097b018: KubernetesCluster=kubernetes
    Associating route table rtb-6097b018 to subnet subnet-6c9d3027
    Adding route to route table rtb-6097b018
    Using Route Table rtb-6097b018
    Creating master security group.
    Creating security group kubernetes-master-kubernetes.
    Adding tag to sg-96004fe8: KubernetesCluster=kubernetes
    Creating minion security group.
    Creating security group kubernetes-minion-kubernetes.
    Adding tag to sg-eb004f95: KubernetesCluster=kubernetes
    Using master security group: kubernetes-master-kubernetes sg-96004fe8
    Using minion security group: kubernetes-minion-kubernetes sg-eb004f95
    Starting Master
    Adding tag to i-012a167d73d1a5cea: Name=kubernetes-master
    Adding tag to i-012a167d73d1a5cea: Role=kubernetes-master
    Adding tag to i-012a167d73d1a5cea: KubernetesCluster=kubernetes
    Waiting for master to be ready
    Attempt 1 to check for master node
    Waiting for instance i-012a167d73d1a5cea to spawn
    Sleeping for 3 seconds…
    Waiting for instance i-012a167d73d1a5cea to spawn
    Sleeping for 3 seconds…
    Waiting for instance i-012a167d73d1a5cea to spawn
    Sleeping for 3 seconds…
    [master running @18.236.200.67]
    Attaching persistent data volume (vol-06f3ca1b18ecce9f5) to master
    {
      "AttachTime": "2018-04-26T13:00:23.668Z",
      "InstanceId": "i-012a167d73d1a5cea",
      "VolumeId": "vol-06f3ca1b18ecce9f5",
      "State": "attaching",
      "Device": "/dev/sdb"
    }
    Attempt 1 to check for SSH to master [ssh to master working]
    Attempt 1 to check for salt-master [salt-master not working yet]
    Attempt 2 to check for salt-master [salt-master not working yet]
    Attempt 3 to check for salt-master [salt-master not working yet]
    Attempt 4 to check for salt-master [salt-master not working yet]
    Attempt 5 to check for salt-master [salt-master not working yet]
    Attempt 6 to check for salt-master [salt-master not working yet]
    Attempt 7 to check for salt-master [salt-master not working yet]
    Attempt 8 to check for salt-master [salt-master not working yet]
    Attempt 9 to check for salt-master [salt-master not working yet]
    Attempt 10 to check for salt-master [salt-master not working yet]
    Attempt 11 to check for salt-master [salt-master not working yet]
    Attempt 12 to check for salt-master [salt-master not working yet]
    Attempt 13 to check for salt-master [salt-master not working yet]
    Attempt 14 to check for salt-master [salt-master not working yet]
    Attempt 15 to check for salt-master [salt-master not working yet]
    Attempt 16 to check for salt-master [salt-master not working yet]
    Attempt 17 to check for salt-master [salt-master not working yet]
    Attempt 18 to check for salt-master [salt-master not working yet]
    Attempt 19 to check for salt-master [salt-master not working yet]
    Attempt 20 to check for salt-master [salt-master not working yet]
    Attempt 21 to check for salt-master [salt-master not working yet]
    Attempt 22 to check for salt-master [salt-master not working yet]
    Attempt 23 to check for salt-master [salt-master not working yet]
    Attempt 24 to check for salt-master [salt-master not working yet]
    Attempt 25 to check for salt-master [salt-master not working yet]
    Attempt 26 to check for salt-master [salt-master not working yet]
    Attempt 27 to check for salt-master [salt-master not working yet]
    Attempt 28 to check for salt-master [salt-master not working yet]
    Attempt 29 to check for salt-master [salt-master not working yet]
    Attempt 30 to check for salt-master [salt-master not working yet]
    Attempt 31 to check for salt-master [salt-master not working yet]
    Attempt 32 to check for salt-master
    (Failed) output was:

    salt-master failed to start on 18.236.200.67. Your cluster is unlikely
    to work correctly. Please run ./cluster/kube-down.sh and re-create the
    cluster. (sorry!)

    Reply
  3. Ganga

    Hi Neeraj,
    I was trying to set up Kubernetes on AWS using an Ubuntu AMI with the steps provided, but I am facing the issue below. Could you please help?

    Attempt 1 to check for SSH to master [ssh to master working]
    Attempt 1 to check for salt-master [salt-master not working yet]
    Attempt 2 to check for salt-master [salt-master not working yet]
    Attempt 3 to check for salt-master [salt-master not working yet]
    Attempt 4 to check for salt-master [salt-master not working yet]
    Attempt 5 to check for salt-master [salt-master not working yet]
    Attempt 6 to check for salt-master [salt-master not working yet]
    Attempt 7 to check for salt-master [salt-master not working yet]
    Attempt 8 to check for salt-master [salt-master not working yet]
    Attempt 9 to check for salt-master [salt-master not working yet]
    Attempt 26 to check for salt-master [salt-master not working yet]
    Attempt 27 to check for salt-master [salt-master not working yet]
    Attempt 28 to check for salt-master [salt-master not working yet]
    Attempt 29 to check for salt-master [salt-master not working yet]
    Attempt 30 to check for salt-master [salt-master not working yet]
    Attempt 31 to check for salt-master [salt-master not working yet]
    Attempt 32 to check for salt-master
    (Failed) output was:

    salt-master failed to start on 52.203.206.214. Your cluster is unlikely
    to work correctly. Please run ./cluster/kube-down.sh and re-create the
    cluster. (sorry!)

    Reply
  4. Deep

    Hi Neeraj,

    Nice documentation and a well-written explanation. Can you please suggest a default-config file for the AWS free tier (aws.amazon.com/free/), where I have limited resources? I would like to run a single master and a single minion on AWS with t2.micro instances, and I have a total 20GB disk limitation.

    Reply
  5. deepak

    Very well documented and fully working stuff.
    Thanks, keep posting; looking forward to some more advanced setup and configuration.

    Reply
    1. deepak

      Hi, I can’t access the UI for Kubernetes;

      none of the services are coming up. It gives this at the end:
      Waiting for cluster initialization.

      This will continually check to see if the API for kubernetes is reachable.
      This might loop forever if there was some uncaught error during start
      up.

      What should I do to troubleshoot the issue?

      Reply
        1. Phanindra

          As the official documentation says:
          “kube-up.sh is a legacy tool that is an easy way to spin up a cluster. This tool is being deprecated, and does not create a production ready environment.”

          Please use “kops”

          Reply
  6. Amit Naudiyal

    Thanks for this write-up. Looking forward to the Highly Available master and minion setup, along with a way to connect to the master if it is kept in a private subnet.

    Reply
    1. Carmen

      It’s worth noting that, if you follow their advice of rebooting your instances manually, the scheduled event icon doesn’t go away immediately. In fact, I’ve been waiting a few hours now and it’s still there on an instance I rebooted manually. The AWS EC2 forums are awash with people waiting more than 24 hours for it to disappear, believing that it won’t and that AWS will reboot the instance again, which of course is annoying the people who have to manually intervene when their instances don’t come back up on their own.

      Reply

