CNE AWS Ready: Understanding EKS Auto Mode
Cloud Native Experience (CNE) uses Amazon EKS Auto Mode to manage Kubernetes node scaling automatically.
EKS Auto Mode scales cluster nodes when workloads require additional compute, memory, storage, or networking resources. Internally, it uses Karpenter to provision and terminate nodes dynamically based on cluster demand.
This approach reduces operational overhead and allows the platform to adjust capacity automatically as application workloads change.
Why CNE Uses Auto Mode
CNE uses Auto Mode as part of its recommended deployment architecture.
Compared to manually managing auto-scaling tools such as Karpenter or Cluster Autoscaler, Auto Mode provides:
- Fully managed scaling maintained by AWS
- Automatic node provisioning based on workload demand
- Integrated security and lifecycle management
This approach reduces operational complexity and aligns with the CNE “Golden Path” deployment model.
EKS Auto Mode adds a 12% surcharge to the cost of the EC2 compute resources used by the cluster.
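The surcharge above can be folded into capacity planning as simple arithmetic; a minimal sketch, where the instance price is a hypothetical figure for illustration:

```python
# Illustrative arithmetic only: applies the 12% Auto Mode surcharge
# quoted above to a hypothetical EC2 on-demand hourly price.
AUTO_MODE_SURCHARGE = 0.12

def effective_hourly_rate(ec2_hourly_price: float) -> float:
    """Return the EC2 hourly price plus the Auto Mode surcharge."""
    return ec2_hourly_price * (1 + AUTO_MODE_SURCHARGE)

# Example: an instance billed at $0.10/hour costs ~$0.112/hour under Auto Mode.
print(round(effective_hourly_rate(0.10), 4))
```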
Node Pools
When Auto Mode is enabled, Amazon EKS automatically creates two node pools:
- system: runs core Kubernetes components
- general-purpose: runs application workloads
CNE workloads run in the general-purpose node pool by default.
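A workload can be pinned to a specific pool with a nodeSelector on the node pool label that Auto Mode node pools carry; a minimal sketch, assuming the karpenter.sh/nodepool label and with an illustrative pod name and image:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: example-app          # illustrative name
spec:
  nodeSelector:
    karpenter.sh/nodepool: general-purpose   # target the application pool
  containers:
    - name: app
      image: public.ecr.aws/nginx/nginx:latest
```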
Storage Class Behavior
EKS Auto Mode requires a compatible storage provisioner.
CNE configures the gp3 storage class as the cluster default because it uses the Auto Mode-compatible ebs.csi.eks.amazonaws.com provisioner.
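A storage class along these lines is what the default amounts to; this is a sketch built around the provisioner named above, and the binding-mode and parameter values are illustrative defaults rather than the exact CNE configuration:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp3
  annotations:
    storageclass.kubernetes.io/is-default-class: "true"   # make this the cluster default
provisioner: ebs.csi.eks.amazonaws.com   # Auto Mode-managed EBS CSI provisioner
volumeBindingMode: WaitForFirstConsumer  # delay binding until a pod is scheduled
parameters:
  type: gp3
```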
gp2 Storage Class
Amazon EKS automatically creates a gp2 storage class in the cluster.
However, the default gp2 configuration uses the legacy in-tree kubernetes.io/aws-ebs provisioner, which is not compatible with Auto Mode. PersistentVolumeClaims that reference it are never bound, so pods that depend on those claims remain Pending and fail to schedule.
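To avoid this, a workload can name the compatible class explicitly in its claim; a minimal sketch, with an illustrative claim name and size:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-data        # illustrative name
spec:
  storageClassName: gp3     # pin to the Auto Mode-compatible class, not gp2
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
```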
Detecting Auto Mode Scaling Events
When the cluster scales nodes, Kubernetes events may include messages from sources matching eks-auto-mode/*. For example:
FailedScheduling ... eks-auto-mode/compute
These events indicate that Auto Mode is evaluating cluster capacity and may provision additional nodes to schedule pending workloads.
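One way to surface these events programmatically is to filter on that source prefix; a minimal sketch in Python over event dictionaries shaped like kubectl's JSON output, where the reportingComponent field and the sample events are assumptions for illustration:

```python
# Sketch: pick out events reported by Auto Mode components, identified
# by the "eks-auto-mode/" prefix on the reporting component name.
def auto_mode_events(events):
    return [
        e for e in events
        if e.get("reportingComponent", "").startswith("eks-auto-mode/")
    ]

# Invented sample events for illustration.
events = [
    {"reason": "FailedScheduling", "reportingComponent": "eks-auto-mode/compute"},
    {"reason": "Pulled", "reportingComponent": "kubelet"},
]
for e in auto_mode_events(events):
    print(e["reason"], e["reportingComponent"])
```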