Amazon EKS
Managed Kubernetes service for running Kubernetes on AWS
EKS is like having a Kubernetes expert manage your clusters. Kubernetes is powerful but complex: you must set up control planes, manage etcd, upgrade versions, and ensure high availability. EKS handles all of this. It runs the Kubernetes control plane (API server, scheduler, controller manager) across multiple AZs, patches and upgrades it automatically, and keeps it highly available. You deploy applications with standard Kubernetes tools (kubectl, Helm), and EKS manages the infrastructure, so you can focus on your workloads rather than cluster operations.
EKS runs the Kubernetes control plane across three AZs for high availability. You create an EKS cluster and AWS manages the control plane; you manage the worker nodes, either on EC2 (self-managed or managed node groups) or on Fargate (serverless). EKS supports the standard Kubernetes APIs and tooling.
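Creating a cluster is a single API call once the IAM role and subnets exist. A minimal sketch of the request you would pass to boto3's `eks.create_cluster`; the cluster name, role ARN, and subnet IDs below are placeholders, not real resources:

```python
# Sketch: assembling an EKS create_cluster request (AWS SDK for Python).
# All ARNs, names, and subnet IDs here are hypothetical placeholders.
def build_cluster_params(name, version, role_arn, subnet_ids):
    """Build the keyword arguments for eks.create_cluster."""
    return {
        "name": name,
        "version": version,          # Kubernetes version the control plane runs
        "roleArn": role_arn,         # IAM role the control plane assumes
        "resourcesVpcConfig": {"subnetIds": subnet_ids},  # one subnet per AZ
    }

params = build_cluster_params(
    "demo-cluster",
    "1.29",
    "arn:aws:iam::111122223333:role/eksClusterRole",
    ["subnet-aaa", "subnet-bbb", "subnet-ccc"],
)

# With boto3 installed and credentials configured, you would then run:
#   eks = boto3.client("eks")
#   eks.create_cluster(**params)   # AWS provisions the managed control plane
```

After the cluster is active, `aws eks update-kubeconfig` wires up kubectl access; the worker nodes are created separately (managed node groups or Fargate profiles).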
Key Capabilities
Key features: IAM integration (map IAM roles and users to Kubernetes RBAC), VPC networking (pods receive VPC IP addresses through ENIs via the VPC CNI), and managed add-ons (CoreDNS, kube-proxy, VPC CNI).
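The IAM-to-RBAC mapping lives in the `aws-auth` ConfigMap in the `kube-system` namespace: each `mapRoles` entry says who an IAM role becomes inside the cluster. A sketch of building one such entry; the role ARN is a placeholder:

```python
# Sketch: one mapRoles entry for the aws-auth ConfigMap.
# The role ARN below is a hypothetical placeholder.
def map_role(role_arn, username, groups):
    """Principals assuming role_arn act in-cluster as `username` and
    receive the RBAC permissions bound to `groups`."""
    return {"rolearn": role_arn, "username": username, "groups": groups}

# Worker nodes need this standard mapping to join the cluster:
node_entry = map_role(
    "arn:aws:iam::111122223333:role/eksNodeRole",
    "system:node:{{EC2PrivateDNSName}}",
    ["system:bootstrappers", "system:nodes"],
)
```

The entry is serialized to YAML under the ConfigMap's `mapRoles` key; a `mapUsers` list works the same way for individual IAM users.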
Gotchas & Constraints
Gotcha #1: EKS charges $0.10/hour per cluster on top of worker node costs, which adds up across many clusters. Gotcha #2: EKS upgrades require planning; test applications against the new Kubernetes version before upgrading production clusters. Constraints: pods per node are capped by the instance type's ENI and IP limits (up to 250 on the largest instances; most instance types support far fewer), managed node groups are limited to 30 per cluster by default, and upgrading self-managed nodes means replacing them yourself, typically with a rolling node replacement to avoid downtime.
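The per-node pod cap comes from a simple formula the VPC CNI uses by default (without prefix delegation): each pod consumes one secondary IPv4 address on an ENI. A sketch of the calculation:

```python
# Sketch of the ENI-based max-pods calculation used by the VPC CNI
# in its default mode (no prefix delegation).
def max_pods(enis, ipv4_per_eni):
    # One address per ENI is the ENI's primary IP and is not available
    # to pods; +2 accounts for host-networking pods (e.g. aws-node,
    # kube-proxy) that don't draw from the VPC IP pool.
    return enis * (ipv4_per_eni - 1) + 2

# m5.large supports 3 ENIs with 10 IPv4 addresses each:
print(max_pods(3, 10))  # → 29
```

This is why a node's instance type, not just its CPU and memory, determines how many pods it can schedule; enabling prefix delegation raises these limits substantially.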
Real-World Example
A company runs Kubernetes on-premises but wants to migrate to AWS. Managing Kubernetes is complex; they spend 40% of their time on cluster operations. They migrate to EKS: create an EKS cluster, deploy applications using their existing Kubernetes manifests, and configure IAM roles for service accounts (IRSA) so pods assume IAM roles for AWS access. They use managed node groups for worker nodes; EKS handles node provisioning, updates, and scaling. They configure Cluster Autoscaler to add and remove nodes automatically based on pod demand. For CI/CD, they use CodePipeline to build Docker images and deploy to EKS with kubectl. They enable CloudWatch Container Insights for monitoring and use the AWS Load Balancer Controller for ALB integration. They run 100 microservices across 50 nodes, and EKS handles all control plane operations.
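IRSA works by attaching a trust policy to each IAM role that restricts who may assume it to a single Kubernetes service account, via the cluster's OIDC provider. A sketch of that trust policy; the account ID, OIDC provider ID, namespace, and service account name are placeholders:

```python
# Sketch: the trust policy behind IAM Roles for Service Accounts (IRSA).
# Account ID, OIDC provider ID, namespace, and SA name are placeholders.
import json

def irsa_trust_policy(oidc_provider_arn, oidc_issuer, namespace, service_account):
    """Trust policy allowing exactly one service account to assume the role."""
    return {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Federated": oidc_provider_arn},
            "Action": "sts:AssumeRoleWithWebIdentity",
            "Condition": {"StringEquals": {
                f"{oidc_issuer}:sub":
                    f"system:serviceaccount:{namespace}:{service_account}",
            }},
        }],
    }

policy = irsa_trust_policy(
    "arn:aws:iam::111122223333:oidc-provider/"
    "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE",
    "oidc.eks.us-east-1.amazonaws.com/id/EXAMPLE",
    "payments", "payments-sa",
)
print(json.dumps(policy, indent=2))
```

Annotating the service account with the role's ARN (`eks.amazonaws.com/role-arn`) completes the link; pods using that service account then receive temporary credentials for the role, with no node-wide instance profile permissions needed.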
The Result
80% reduction in operational overhead, high availability, and native AWS integration.