How can the docs be improved?
While trying to install Karpenter on a recently created EKS cluster, I'm finding conflicting installation instructions that I feel could be better clarified: I have to reference multiple guides just to install Karpenter, and I'm unsure which instructions are correct and still valid.
Currently, there are only two sets of instructions on how to install Karpenter: Getting Started with Karpenter and Migrating from Cluster Autoscaler. However, I'm trying to install Karpenter on an existing EKS cluster where Cluster Autoscaler has never been installed. The Getting Started with Karpenter guide won't work because it assumes you are creating a new cluster from scratch, and the eksctl command it provides will fail if the cluster already exists. That leaves the Migrating from Cluster Autoscaler guide, which seems counterintuitive since I'm not migrating away from Cluster Autoscaler.
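As a hedged illustration of what an existing-cluster path could look like, the Getting Started environment variables can simply point at the cluster you already have, skipping the eksctl create cluster step entirely; the cluster name and region below are assumptions you'd replace with your own:

```bash
# Assumptions: replace with your actual cluster name and region.
export CLUSTER_NAME="my-existing-cluster"
export AWS_REGION="us-east-1"

# Confirm the cluster exists and is ACTIVE before running the remaining
# Getting Started steps against it (instead of `eksctl create cluster`).
aws eks describe-cluster --name "${CLUSTER_NAME}" --region "${AWS_REGION}" \
  --query "cluster.status" --output text
```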
Step 6 of the v1 Migration Upgrade Procedure guide references the comment "# Service account annotation can be dropped when using pod identity"; however, neither of the Getting Started guides mentions this, and both still direct you to use/update the aws-auth ConfigMap. Pod identity should be covered in Getting Started, since it can be set up from the start and doesn't require an upgrade to implement (3. Create a Cluster and Update aws-auth ConfigMap).
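For reference, here is a rough sketch of the pod identity route on an existing cluster; the account ID and role name are placeholders, and it assumes the Karpenter controller's IAM role already exists with a trust policy for pods.eks.amazonaws.com:

```bash
# Assumption: install the EKS Pod Identity agent add-on if it isn't present.
aws eks create-addon --cluster-name "${CLUSTER_NAME}" \
  --addon-name eks-pod-identity-agent

# Associate the Karpenter controller's service account with its IAM role.
# The role ARN is a placeholder; substitute your actual controller role.
aws eks create-pod-identity-association \
  --cluster-name "${CLUSTER_NAME}" \
  --namespace kube-system \
  --service-account karpenter \
  --role-arn "arn:aws:iam::111122223333:role/KarpenterControllerRole-${CLUSTER_NAME}"
```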
Migrating from Cluster Autoscaler | Deploy Karpenter only provides guidance on installing Karpenter by making a local copy of the Karpenter manifest and modifying it to target an existing NodeGroup. I'm uncertain why there are no instructions on also installing Karpenter using Helm, as in Getting Started with Karpenter | Install Karpenter, or on using a Fargate profile as an alternative when no NodeGroup exists. I also noticed that the guide's kubectl create commands fail on a new 1.31 EKS cluster because the resources already exist and can't be created again; I instead ran them as kubectl apply to update the existing resources, since I'm unable to validate what versions are included with EKS 1.31 by default (sketches of both points below).
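For the Helm gap, here is a hedged sketch of what an Install-Karpenter-style Helm step might look like against an existing cluster; the chart location follows the Getting Started guide, while the version and values below are assumptions to adjust for your environment:

```bash
# Assumptions: CLUSTER_NAME is already exported, and KARPENTER_VERSION is a
# placeholder release, not a recommendation.
export KARPENTER_VERSION="1.0.6"

helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter \
  --version "${KARPENTER_VERSION}" \
  --namespace kube-system --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --wait
```

And for the kubectl create failures, the workaround is just the create-or-update semantics of kubectl apply; the manifest name below is a placeholder for whichever resource the guide has you create:

```bash
# `kubectl create` returns an AlreadyExists error when the object is present;
# `kubectl apply` creates the object if missing or patches it if it exists.
kubectl apply -f karpenter-resource.yaml   # placeholder filename
```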
@antmatts We'd love help getting our docs more up-to-date. In general, given the issues that come in and the features that we are working on, it's tough for us to keep up with it all. We'd love help making our docs clearer, so if you have a change you want to propose, I'd recommend opening a PR!
@jonathan-innis I've been fighting to get this set up on an existing cluster, but I finally got it working. I'm making some notes and can help put something together. Just to confirm, what do you mean by PR in this context?