Implementation of AWS EKS Node Group Using Terraform

05 / Jul / 2022 by kaushlendra.singh

The aws_eks_node_group resource manages an EKS Node Group, which can provision and optionally update an Auto Scaling Group of Kubernetes worker nodes compatible with EKS.

Amazon EKS managed node groups automate the provisioning and lifecycle management of nodes (Amazon EC2 instances) for Amazon EKS Kubernetes clusters.

With Amazon EKS managed node groups, you don’t need to separately provision or register the Amazon EC2 instances that provide compute capacity to run your Kubernetes applications. You can create, automatically update, or terminate nodes for your cluster with a single operation. Node updates and terminations automatically drain nodes to ensure that your applications stay available.

Every managed node is provisioned as part of an Amazon EC2 Auto Scaling group that’s managed for you by Amazon EKS. Every resource, including the instances and Auto Scaling groups, runs within your AWS account. Each node group runs across multiple Availability Zones that you define.

Managed node group capacity types

On-Demand

With On-Demand Instances, you pay for compute capacity by the second, with no long-term commitments.

Spot

Amazon EC2 Spot Instances are spare Amazon EC2 capacity offered at steep discounts compared to On-Demand prices. Spot Instances can be interrupted with a two-minute notice when EC2 needs the capacity back. For more information, see Spot Instances in the Amazon EC2 User Guide for Linux Instances. You can configure a managed node group with Amazon EC2 Spot Instances to optimize costs for the compute nodes running in your Amazon EKS cluster.
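
In Terraform, the capacity type is chosen with the capacity_type argument on aws_eks_node_group, which accepts "ON_DEMAND" (the default) or "SPOT". Below is a minimal sketch of a Spot node group; the resource names and instance types are illustrative, not taken from this project:

resource "aws_eks_node_group" "spot" {
  cluster_name    = aws_eks_cluster.example.name
  node_group_name = "spot-workers"
  node_role_arn   = aws_iam_role.example.arn
  subnet_ids      = aws_subnet.example[*].id

  # "SPOT" requests Spot capacity; "ON_DEMAND" (the default) uses On-Demand Instances.
  capacity_type  = "SPOT"

  # Listing several instance types gives EC2 more Spot pools to draw from,
  # which lowers the chance of interruption.
  instance_types = ["t3.large", "t3a.large", "m5.large"]

  scaling_config {
    desired_size = 2
    max_size     = 4
    min_size     = 1
  }
}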

Example Usage

resource "aws_eks_node_group" "example" {

  cluster_name    = aws_eks_cluster.example.name

  node_group_name = "example"

  node_role_arn   = aws_iam_role.example.arn

  subnet_ids      = aws_subnet.example[*].id

  scaling_config {

    desired_size = 1

    max_size     = 1

    min_size     = 1

  }

  update_config {

    max_unavailable = 2

  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.

  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.

  depends_on = [

    aws_iam_role_policy_attachment.example-AmazonEKSWorkerNodePolicy,

    aws_iam_role_policy_attachment.example-AmazonEKS_CNI_Policy,

    aws_iam_role_policy_attachment.example-AmazonEC2ContainerRegistryReadOnly,

  ]

}

The following code is taken from an existing project.

resource "aws_eks_node_group" "aws_eks_node_group" {

  cluster_name    ="${var.environment}_${var.cluster_name}"

  node_group_name = "${var.environment}-worker"

  node_role_arn   = aws_iam_role.eks_worker_role.arn

  subnet_ids      = "${concat(var.private_subnets)}"

  scaling_config {

    

    desired_size = 2

    max_size     = 3

    min_size     = 1

  }

  update_config {

    max_unavailable = 2

  }

  tags = {

    "Name"   = "example abc"

    "SecurityClass" = "example abc"

    "Customers" = "example abc"

    "ServiceLevel" ="example abc"

    "Billable" = "example abc"

    "RemedyGroup" = "example abc"

    "Function" = "example abc"

    "ProductCode" = "example abc"

  }

  # Ensure that IAM Role permissions are created before and deleted after EKS Node Group handling.

  # Otherwise, EKS will not be able to properly delete EC2 Instances and Elastic Network Interfaces.

  depends_on = [

    aws_iam_role_policy_attachment.AmazonEKSWorkerNodePolicy,

    aws_iam_role_policy_attachment.AmazonEKS_CNI_Policy,

    aws_iam_role_policy_attachment.AmazonEC2ContainerRegistryReadOnly,

  ]

}
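
The block above assumes a few input variables declared elsewhere in the project. A minimal sketch of those declarations follows; the types match how the variables are used above, but the descriptions are assumptions on our part:

# Hypothetical declarations for the variables referenced above.
variable "environment" {
  description = "Deployment environment name, e.g. dev or prod"
  type        = string
}

variable "cluster_name" {
  description = "Base name of the EKS cluster"
  type        = string
}

variable "private_subnets" {
  description = "IDs of the private subnets the worker nodes run in"
  type        = list(string)
}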

Ignoring Changes to Desired Size

You can use the generic Terraform resource lifecycle configuration block with ignore_changes to create an EKS Node Group with an initial count of running instances, then ignore any changes to that count caused externally (e.g., Application Autoscaling).

resource "aws_eks_node_group" "example" {

  # ... other configurations ...

  scaling_config {

    # Example: Create EKS Node Group with 2 instances to start

    desired_size = 2

    # ... other configurations ...

  }

  # Optional: Allow external changes without Terraform plan difference

  lifecycle {

    ignore_changes = [scaling_config[0].desired_size]

  }

}

Example IAM Role for EKS Node Group

resource "aws_iam_role" "example" {

  name = "eks-node-group-example"

  assume_role_policy = jsonencode({

    Statement = [{

      Action = "sts:AssumeRole"

      Effect = "Allow"

      Principal = {

        Service = "ec2.amazonaws.com"

      }

    }]

    Version = "2012-10-17"

  })

}

resource "aws_iam_role_policy_attachment" "example-AmazonEKSWorkerNodePolicy" {

  policy_arn = "arn:aws:iam::aws:policy/AmazonEKSWorkerNodePolicy"

  role       = aws_iam_role.example.name

}

resource "aws_iam_role_policy_attachment" "example-AmazonEKS_CNI_Policy" {

  policy_arn = "arn:aws:iam::aws:policy/AmazonEKS_CNI_Policy"

  role       = aws_iam_role.example.name

}

resource "aws_iam_role_policy_attachment" "example-AmazonEC2ContainerRegistryReadOnly" {

  policy_arn = "arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly"

  role       = aws_iam_role.example.name

}

Example Subnets for EKS Node Group

data "aws_availability_zones" "available" {

  state = "available"

}

resource "aws_subnet" "example" {

  count = 2

  availability_zone = data.aws_availability_zones.available.names[count.index]

  cidr_block        = cidrsubnet(aws_vpc.example.cidr_block, 8, count.index)

  vpc_id            = aws_vpc.example.id

  tags = {

    "kubernetes.io/cluster/${aws_eks_cluster.example.name}" = "shared"

  }

}
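
The subnets above reference an aws_vpc.example resource that is not shown. A minimal sketch that would satisfy those references, with an illustrative CIDR range:

# Hypothetical VPC backing the subnet example above; the CIDR block is illustrative.
resource "aws_vpc" "example" {
  cidr_block = "10.0.0.0/16"
}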
data "aws_availability_zones" "available" {

  state = "available"

}

resource "aws_subnet" "example" {

  count = 2

  availability_zone = data.aws_availability_zones.available.names[count.index]

  cidr_block        = cidrsubnet(aws_vpc.example.cidr_block, 8, count.index)

  vpc_id            = aws_vpc.example.id

  tags = {

    "kubernetes.io/cluster/${aws_eks_cluster.example.name}" = "shared"

  }

}

 

Import

EKS Node Groups can be imported using the cluster_name and node_group_name separated by a colon (:), e.g.,

$ terraform import aws_eks_node_group.my_node_group my_cluster:my_node_group
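
Once imported, you can confirm the node group is now tracked in Terraform state; the resource address below matches the one used in the import command:

$ terraform state show aws_eks_node_group.my_node_group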
