Amazon Elastic Kubernetes Service (EKS)

Prerequisites, EKS cluster + node group, kubeconfig, deploy the voting app on AWS


EKS runs the Kubernetes control plane in AWS; you add managed node groups (or Fargate profiles) for workloads. You pay for the control plane per hour plus worker EC2 (or Fargate) usage.

Prerequisites

EKS prerequisites checklist

  • An AWS account and permission to create EKS clusters, IAM roles, and VPC networking.
  • kubectl installed locally (or use CloudShell).
  • AWS CLI v2 configured (aws configure) with credentials that can call eks, iam, and ec2.
  • For the console flow: an EKS cluster IAM role and a node instance role for the node group.
  • A VPC with subnets (the EKS wizard defaults are a common starting point).
  • Optionally, an EC2 key pair if you want SSH access to nodes (not required for kubectl-only workflows).
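A few read-only calls confirm most of this checklist before you spend anything; this sketch assumes the AWS CLI and kubectl are already on your PATH:

```shell
# Which IAM principal the CLI is authenticated as
aws sts get-caller-identity

# Which region the CLI will target by default
aws configure get region

# kubectl is installed (the server-side check comes later, after kubeconfig setup)
kubectl version --client
```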

Install or verify AWS CLI

Terminal window
aws --version

On macOS you can install the v2 package from AWS; on Linux use your distro package or the official zip. After install, run aws configure.
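On Linux, the official zip install looks like the following (x86_64 shown; swap the architecture in the URL for ARM machines):

```shell
# Download and install AWS CLI v2 from the official bundle
curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install

# Verify, then set credentials, default region, and output format
aws --version
aws configure
```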

Point kubectl at EKS (after cluster exists)

Many tutorials keep kubectl in ~/bin:

Terminal window
mkdir -p "$HOME/bin"
# place kubectl binary in ~/bin if needed
export PATH="$PATH:$HOME/bin"
kubectl version --client
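If you still need the binary itself, the upstream Kubernetes release server provides it directly (Linux amd64 shown; this drops it into the ~/bin directory set up above):

```shell
# Fetch the latest stable kubectl release for Linux amd64
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
mv kubectl "$HOME/bin/"
kubectl version --client
```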

Create the cluster (console outline)

  1. Services → Elastic Kubernetes Service → Add cluster → Create.
  2. Cluster configuration: name (e.g. example-voting-app), Kubernetes version, cluster IAM role, optional secrets encryption.
  3. Networking: pick a VPC and subnets that can reach the EKS API and the internet (or your private design).
  4. Review and Create; wait until the cluster is Active.
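The same steps can be scripted with the AWS CLI instead of the console; the role ARN and subnet IDs below are placeholders you must replace with values from your own account:

```shell
# Create the cluster (control plane only; nodes come later)
aws eks create-cluster \
  --region us-west-2 \
  --name example-voting-app \
  --role-arn arn:aws:iam::123456789012:role/eksClusterRole \
  --resources-vpc-config subnetIds=subnet-aaaa,subnet-bbbb

# Block until the cluster status is ACTIVE
aws eks wait cluster-active --region us-west-2 --name example-voting-app
```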

EKS cluster configuration

EKS networking — VPC and subnets

Add a node group

  1. Open your cluster → Compute → Add node group.
  2. Name the group, attach the node IAM role, choose subnets.
  3. Set AMI family, instance type, disk, scaling (min/max/desired).
  4. Create and wait until nodes join.
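The node-group steps above map to a single CLI call; again, the node role ARN, subnet IDs, and sizing here are illustrative placeholders:

```shell
# Create a managed node group attached to the cluster
aws eks create-nodegroup \
  --region us-west-2 \
  --cluster-name example-voting-app \
  --nodegroup-name default-workers \
  --node-role arn:aws:iam::123456789012:role/eksNodeRole \
  --subnets subnet-aaaa subnet-bbbb \
  --instance-types t3.medium \
  --scaling-config minSize=1,maxSize=3,desiredSize=2

# Block until the nodes have joined and the group is ACTIVE
aws eks wait nodegroup-active --region us-west-2 \
  --cluster-name example-voting-app --nodegroup-name default-workers
```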

Node group — name, role, subnets

Node group — compute configuration

Active node group summary

Configure kubeconfig

Replace region and cluster name with yours:

Terminal window
aws eks --region us-west-2 update-kubeconfig --name example-voting-app
kubectl get nodes

You should see your managed nodes. The control plane does not appear in the node list; AWS operates it on your behalf.
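To block until the workers are actually schedulable, rather than eyeballing the list:

```shell
# Wait for every node to report the Ready condition
kubectl wait --for=condition=Ready nodes --all --timeout=300s

# Wide output adds internal IPs, OS image, and container runtime
kubectl get nodes -o wide
```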

Deploy the voting application

Clone the public sample (not any private fork):

Terminal window
git clone https://github.com/dockersamples/example-voting-app.git --depth 1
cd example-voting-app/k8s-specifications

Apply manifests in dependency order (data stores before worker, then apps), for example:

Terminal window
kubectl apply -f redis-deployment.yaml -f redis-service.yaml
kubectl apply -f db-deployment.yaml -f db-service.yaml
kubectl wait --for=condition=available deployment/db --timeout=180s
kubectl apply -f worker-deployment.yaml
kubectl apply -f vote-deployment.yaml -f vote-service.yaml
kubectl apply -f result-deployment.yaml -f result-service.yaml

If your copy of the manifests still uses NodePort, switch the vote and result Services to type: LoadBalancer so AWS provisions load balancer URLs for them (same idea as the GKE note).
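One way to make that switch without editing the YAML, assuming the Services are named vote and result as in the upstream manifests:

```shell
# Change the Service type in place; AWS then provisions a load balancer per Service
kubectl patch svc vote   -p '{"spec": {"type": "LoadBalancer"}}'
kubectl patch svc result -p '{"spec": {"type": "LoadBalancer"}}'
```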

Terminal window
kubectl get deployments,svc

Once each LoadBalancer Service shows an EXTERNAL-IP (on AWS this is an ELB hostname rather than an IP), open the voting UI:

Cats vs dogs vote page on AWS
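You can also pull the load balancer hostname straight from the Service status instead of scanning the table; the jsonpath field stays empty until AWS finishes provisioning:

```shell
# Extract the ELB hostname assigned to the vote Service
VOTE_HOST=$(kubectl get svc vote -o jsonpath='{.status.loadBalancer.ingress[0].hostname}')
echo "http://$VOTE_HOST"
```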

Cleanup

Delete the node group first, then the cluster, and remove any orphaned load balancers and security groups so charges stop.
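A CLI sketch of that teardown order, using the names from this walkthrough:

```shell
# Deleting the Services first removes the ELBs Kubernetes created for them
kubectl delete svc vote result

# Node group must be gone before the cluster can be deleted
aws eks delete-nodegroup --region us-west-2 \
  --cluster-name example-voting-app --nodegroup-name default-workers
aws eks wait nodegroup-deleted --region us-west-2 \
  --cluster-name example-voting-app --nodegroup-name default-workers

aws eks delete-cluster --region us-west-2 --name example-voting-app
```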