AEWS study

Week 5 - EKS Autoscaling - #2

haru224 2025. 3. 8. 07:31

Based on the AWS EKS Hands-on Study run by Gasida of CloudNet@.

 

6. Karpenter


 

☞ [Video] Optimizing Amazon EKS costs to the limit with Karpenter: an open-source node lifecycle management solution that provisions compute resources in seconds - YouTube

  • Overview : a high-performance, intelligent k8s compute provisioning and management solution that can respond within seconds and selects nodes at lower compute cost
    • Intelligent, dynamic instance type selection - Spot, AWS Graviton, etc.
    • Automatic workload consolidation
    • Consistent, faster node startup times that minimize wasted time and cost
  • How it works

Source: https://www.youtube.com/watch?v=yMOaOlPvrgY&t=969

  • Consolidation

Source: https://youtu.be/yMOaOlPvrgY?si=8VDzi4HsXzIWvNP6&t=1017

  • How Consolidation works

Source: https://youtu.be/yMOaOlPvrgY?si=y5BWUHrrALB0YQoR&t=1065

  • Disruption cost - e.g., nodes with many pods where the rescheduling impact is large, TTL for long-running nodes, cost considerations
  • Simplified management

Source: https://youtu.be/yMOaOlPvrgY?si=qUwzpa6r_fTAtHe_&t=1262

 

☞ [Video] Optimizing Kubernetes clusters with Karpenter: cost savings and improved efficiency* - YouTube , original video*

  • [Video] [CNKCD2024] Karpenter internals and case analysis for flexible cloud operations (강인호)

 

  • [Video] [CNKCD2024] How does the Kubernetes scheduler select nodes? (임찬식)

Getting Started with Karpenter hands-on - Docs

 

1. Install utilities

  1. AWS CLI : configure credentials
  2. kubectl - the Kubernetes CLI
  3. eksctl (>= v0.202.0) - the CLI for AWS EKS
  4. helm - the package manager for Kubernetes
  5. eks-node-viewer (see the install sketch below)
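
A minimal install sketch, assuming a macOS/Linux workstation with Homebrew and Go available; the exact install method for each tool is up to you, and the binaries can also be downloaded from each project's releases page.

# Core CLIs used throughout this lab
brew install awscli kubectl eksctl helm
brew tap aws/tap && brew install eks-node-viewer   # or: go install github.com/awslabs/eks-node-viewer/cmd/eks-node-viewer@latest
aws configure                                      # set the access key, secret key, and default region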

2. Set environment variables

# Set variables
export KARPENTER_NAMESPACE="kube-system"
export KARPENTER_VERSION="1.2.1"
export K8S_VERSION="1.32"

export AWS_PARTITION="aws" # if you are not using standard partitions, you may need to configure to aws-cn / aws-us-gov
export CLUSTER_NAME="gasida-karpenter-demo" # ${USER}-karpenter-demo
export AWS_DEFAULT_REGION="ap-northeast-2"
export AWS_ACCOUNT_ID="$(aws sts get-caller-identity --query Account --output text)"
export TEMPOUT="$(mktemp)"
export ALIAS_VERSION="$(aws ssm get-parameter --name "/aws/service/eks/optimized-ami/${K8S_VERSION}/amazon-linux-2023/x86_64/standard/recommended/image_id" --query Parameter.Value | xargs aws ec2 describe-images --query 'Images[0].Name' --image-ids | sed -r 's/^.*(v[[:digit:]]+).*$/\1/')"

# Verify
echo "${KARPENTER_NAMESPACE}" "${KARPENTER_VERSION}" "${K8S_VERSION}" "${CLUSTER_NAME}" "${AWS_DEFAULT_REGION}" "${AWS_ACCOUNT_ID}" "${TEMPOUT}" "${ALIAS_VERSION}"

 

3. Create a Cluster

  • Use CloudFormation to set up the infrastructure needed by the EKS cluster. See CloudFormation for a complete description of what cloudformation.yaml does for Karpenter.
  • Create a Kubernetes service account and AWS IAM Role, and associate them using IRSA to let Karpenter launch instances.
  • Add the Karpenter node role to the aws-auth configmap to allow nodes to connect.
  • Use AWS EKS managed node groups for the kube-system and karpenter namespaces. Uncomment fargateProfiles settings (and comment out managedNodeGroups settings) to use Fargate for both namespaces instead.
  • Set KARPENTER_IAM_ROLE_ARN variables.
  • Create a role to allow spot instances.
  • Run Helm to install Karpenter
# Create the IAM Policy/Role, SQS queue, and EventBridge rules with a CloudFormation stack : takes about 3 minutes
## IAM Policy : KarpenterControllerPolicy-gasida-karpenter-demo
## IAM Role : KarpenterNodeRole-gasida-karpenter-demo
curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/cloudformation.yaml  > "${TEMPOUT}" \
&& aws cloudformation deploy \
  --stack-name "Karpenter-${CLUSTER_NAME}" \
  --template-file "${TEMPOUT}" \
  --capabilities CAPABILITY_NAMED_IAM \
  --parameter-overrides "ClusterName=${CLUSTER_NAME}"


# Create the cluster : EKS cluster creation takes about 15 minutes
eksctl create cluster -f - <<EOF
---
apiVersion: eksctl.io/v1alpha5
kind: ClusterConfig
metadata:
  name: ${CLUSTER_NAME}
  region: ${AWS_DEFAULT_REGION}
  version: "${K8S_VERSION}"
  tags:
    karpenter.sh/discovery: ${CLUSTER_NAME}

iam:
  withOIDC: true
  podIdentityAssociations:
  - namespace: "${KARPENTER_NAMESPACE}"
    serviceAccountName: karpenter
    roleName: ${CLUSTER_NAME}-karpenter
    permissionPolicyARNs:
    - arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:policy/KarpenterControllerPolicy-${CLUSTER_NAME}

iamIdentityMappings:
- arn: "arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/KarpenterNodeRole-${CLUSTER_NAME}"
  username: system:node:{{EC2PrivateDNSName}}
  groups:
  - system:bootstrappers
  - system:nodes
  ## If you intend to run Windows workloads, the kube-proxy group should be specified.
  # For more information, see https://github.com/aws/karpenter/issues/5099.
  # - eks:kube-proxy-windows

managedNodeGroups:
- instanceType: m5.large
  amiFamily: AmazonLinux2023
  name: ${CLUSTER_NAME}-ng
  desiredCapacity: 2
  minSize: 1
  maxSize: 10
  iam:
    withAddonPolicies:
      externalDNS: true

addons:
- name: eks-pod-identity-agent
EOF


# Verify the EKS deployment
eksctl get cluster
eksctl get nodegroup --cluster $CLUSTER_NAME
eksctl get iamidentitymapping --cluster $CLUSTER_NAME
eksctl get iamserviceaccount --cluster $CLUSTER_NAME
eksctl get addon --cluster $CLUSTER_NAME

# Rename the kubectl context
kubectl ctx
kubectl config rename-context "<your IAM User>@<your Nickname>-karpenter-demo.ap-northeast-2.eksctl.io" "karpenter-demo"
kubectl config rename-context "eks.user@gasida-karpenter-demo.ap-northeast-2.eksctl.io" "karpenter-demo"

# Verify Kubernetes resources
kubectl ns default
kubectl cluster-info
kubectl get node --label-columns=node.kubernetes.io/instance-type,eks.amazonaws.com/capacityType,topology.kubernetes.io/zone
kubectl get pod -n kube-system -owide
kubectl get pdb -A
kubectl describe cm -n kube-system aws-auth

# Ensure the service-linked role for EC2 Spot Fleet exists : we are only confirming it exists, so the error below is expected!
# If the role has already been successfully created, you will see:
# An error occurred (InvalidInput) when calling the CreateServiceLinkedRole operation: Service role name AWSServiceRoleForEC2Spot has been taken in this account, please try a different suffix.
aws iam create-service-linked-role --aws-service-name spot.amazonaws.com || true

  • AWS web console : check EKS → Access and Add-ons (a CLI alternative is sketched below)
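
The same information can be pulled from the CLI; a small sketch, assuming the aws CLI identity has eks:ListAccessEntries / eks:ListAddons permissions.

# Access entries and add-ons for the cluster, equivalent to the console check above
aws eks list-access-entries --cluster-name "${CLUSTER_NAME}"
aws eks list-addons --cluster-name "${CLUSTER_NAME}"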

 

  • Install tools to observe the lab : kube-ops-view , (optional) ExternalDNS
# kube-ops-view
helm repo add geek-cookbook https://geek-cookbook.github.io/charts/
helm install kube-ops-view geek-cookbook/kube-ops-view --version 1.2.2 --set service.main.type=LoadBalancer --set env.TZ="Asia/Seoul" --namespace kube-system
echo -e "http://$(kubectl get svc -n kube-system kube-ops-view -o jsonpath="{.status.loadBalancer.ingress[0].hostname}"):8080/#scale=1.5"

#kubectl annotate service kube-ops-view -n kube-system "external-dns.alpha.kubernetes.io/hostname=kubeopsview.$MyDomain"
#echo -e "Kube Ops View URL = http://kubeopsview.$MyDomain:8080/#scale=1.5"
#open "http://kubeopsview.$MyDomain:8080/#scale=1.5"

# (Optional) ExternalDNS
#MyDomain=<your domain>
#MyDomain=gasida.link
#MyDnzHostedZoneId=$(aws route53 list-hosted-zones-by-name --dns-name "${MyDomain}." --query "HostedZones[0].Id" --output text)
#echo $MyDomain, $MyDnzHostedZoneId
#curl -s https://raw.githubusercontent.com/gasida/PKOS/main/aews/externaldns.yaml | MyDomain=$MyDomain MyDnzHostedZoneId=$MyDnzHostedZoneId envsubst | kubectl apply -f -

 

4. Install Karpenter

# Logout of helm registry to perform an unauthenticated pull against the public ECR
helm registry logout public.ecr.aws

# Set and verify variables for the Karpenter install
export CLUSTER_ENDPOINT="$(aws eks describe-cluster --name "${CLUSTER_NAME}" --query "cluster.endpoint" --output text)"
export KARPENTER_IAM_ROLE_ARN="arn:${AWS_PARTITION}:iam::${AWS_ACCOUNT_ID}:role/${CLUSTER_NAME}-karpenter"
echo "${CLUSTER_ENDPOINT} ${KARPENTER_IAM_ROLE_ARN}"

# Install Karpenter
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version "${KARPENTER_VERSION}" --namespace "${KARPENTER_NAMESPACE}" --create-namespace \
  --set "settings.clusterName=${CLUSTER_NAME}" \
  --set "settings.interruptionQueue=${CLUSTER_NAME}" \
  --set controller.resources.requests.cpu=1 \
  --set controller.resources.requests.memory=1Gi \
  --set controller.resources.limits.cpu=1 \
  --set controller.resources.limits.memory=1Gi \
  --wait

# Verify
helm list -n kube-system
kubectl get-all -n $KARPENTER_NAMESPACE
kubectl get all -n $KARPENTER_NAMESPACE
kubectl get crd | grep karpenter
ec2nodeclasses.karpenter.k8s.aws             2025-03-08T11:38:10Z
nodeclaims.karpenter.sh                      2025-03-08T11:38:10Z
nodepools.karpenter.sh                       2025-03-08T11:38:10Z

  • Karpenter uses the ClusterFirst pod DNS policy by default. If Karpenter must manage capacity for the DNS service pods themselves, this means DNS is not running when Karpenter starts. In that case, set the pod DNS policy to Default with --set dnsPolicy=Default. This makes Karpenter use the host's DNS resolution instead of the cluster's internal DNS, removing its dependency on the DNS service pods. (See the sketch after this list.)
  • Karpenter creates a mapping between the CloudProvider machines and the cluster's CustomResources to track node capacity. To keep this mapping consistent, Karpenter uses the following tag keys:
    • karpenter.sh/managed-by
    • karpenter.sh/nodepool
    • kubernetes.io/cluster/${CLUSTER_NAME}
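
If the DNS policy needs to be changed as described above, a hedged sketch of the flag, reusing the Helm install from step 4:

# Switch Karpenter to host DNS resolution so it does not depend on in-cluster DNS pods
helm upgrade --install karpenter oci://public.ecr.aws/karpenter/karpenter --version "${KARPENTER_VERSION}" \
  --namespace "${KARPENTER_NAMESPACE}" --reuse-values \
  --set dnsPolicy=Default \
  --wait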

 

5. Install Prometheus / Grafana - Docs

#
helm repo add grafana-charts https://grafana.github.io/helm-charts
helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
kubectl create namespace monitoring

# Install Prometheus
curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/prometheus-values.yaml | envsubst | tee prometheus-values.yaml
helm install --namespace monitoring prometheus prometheus-community/prometheus --values prometheus-values.yaml
# The downloaded prometheus-values.yaml includes the following scrape config targeting Karpenter:
extraScrapeConfigs: |
    - job_name: karpenter
      kubernetes_sd_configs:
      - role: endpoints
        namespaces:
          names:
          - kube-system
      relabel_configs:
      - source_labels:
        - __meta_kubernetes_endpoints_name
        - __meta_kubernetes_endpoint_port_name
        action: keep
        regex: karpenter;http-metrics

# Delete the Prometheus Alertmanager StatefulSet since Alertmanager is not used
kubectl delete sts -n monitoring prometheus-alertmanager

# Set up access to Prometheus
export POD_NAME=$(kubectl get pods --namespace monitoring -l "app.kubernetes.io/name=prometheus,app.kubernetes.io/instance=prometheus" -o jsonpath="{.items[0].metadata.name}")
kubectl --namespace monitoring port-forward $POD_NAME 9090 &
open http://127.0.0.1:9090

# Install Grafana
curl -fsSL https://raw.githubusercontent.com/aws/karpenter-provider-aws/v"${KARPENTER_VERSION}"/website/content/en/preview/getting-started/getting-started-with-karpenter/grafana-values.yaml | tee grafana-values.yaml
helm install --namespace monitoring grafana grafana-charts/grafana --values grafana-values.yaml
# The downloaded grafana-values.yaml includes the Prometheus datasource and Karpenter dashboards:
datasources:
  datasources.yaml:
    apiVersion: 1
    datasources:
    - name: Prometheus
      type: prometheus
      version: 1
      url: http://prometheus-server:80
      access: proxy
dashboardProviders:
  dashboardproviders.yaml:
    apiVersion: 1
    providers:
    - name: 'default'
      orgId: 1
      folder: ''
      type: file
      disableDeletion: false
      editable: true
      options:
        path: /var/lib/grafana/dashboards/default
dashboards:
  default:
    capacity-dashboard:
      url: https://karpenter.sh/preview/getting-started/getting-started-with-karpenter/karpenter-capacity-dashboard.json
    performance-dashboard:
      url: https://karpenter.sh/preview/getting-started/getting-started-with-karpenter/karpenter-performance-dashboard.json

# admin password
kubectl get secret --namespace monitoring grafana -o jsonpath="{.data.admin-password}" | base64 --decode ; echo
jnLJNBaWM3N7RTn7GvlBkFpAYtN9eDqf5zJZAqcK

# Access Grafana
kubectl port-forward --namespace monitoring svc/grafana 3000:80 &
open http://127.0.0.1:3000

 

 

6. Create NodePool (formerly Provisioner) - Workshop , Docs , NodeClaims

  • Managed resources (subnets, security groups) are discovered via securityGroupSelectorTerms and subnetSelectorTerms
  • consolidationPolicy : the policy for cleaning up unused nodes; DaemonSets are excluded
  • A single Karpenter NodePool can handle many different pod shapes. Karpenter makes scheduling and provisioning decisions based on pod attributes such as labels and affinity. This means Karpenter removes the need to manage many different node groups.
  • Create the default NodePool with the command below. This NodePool uses securityGroupSelectorTerms and subnetSelectorTerms to discover the resources used to launch nodes. We applied the karpenter.sh/discovery tag in the command above; depending on how these resources are shared across clusters, you may need a different tagging scheme.
  • The consolidationPolicy set to WhenEmptyOrUnderutilized in the disruption block configures Karpenter to reduce cluster cost by removing and replacing nodes. As a result, consolidation terminates any empty nodes on the cluster. This behavior can be disabled by setting consolidateAfter to Never, telling Karpenter that it should never consolidate nodes. Review the NodePool API docs for more information.
  • Note: this NodePool creates capacity only while the sum of all created capacity is less than the specified limit.
#
echo $ALIAS_VERSION
v20250228

#
cat <<EOF | envsubst | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-generation
          operator: Gt
          values: ["2"]
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      expireAfter: 720h # 30 * 24h = 720h
  limits:
    cpu: 1000
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  role: "KarpenterNodeRole-${CLUSTER_NAME}" # replace with your cluster name
  amiSelectorTerms:
    - alias: "al2023@${ALIAS_VERSION}" # ex) al2023@latest
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
EOF

# Verify
kubectl get nodepool,ec2nodeclass,nodeclaims

  • Karpenter is now active and ready to begin provisioning nodes.

7. Scale up deployment : This deployment uses the pause image and starts with zero replicas.

  • Scale up deployment

 

# Deploy a Deployment where each pause pod requests 1 CPU as a guaranteed minimum
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 0
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
      - name: inflate
        image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
        resources:
          requests:
            cpu: 1
        securityContext:
          allowPrivilegeEscalation: false
EOF

# [New terminal] monitoring
eks-node-viewer --resources cpu,memory
eks-node-viewer --resources cpu,memory --node-selector "karpenter.sh/registered=true" --extra-labels eks-node-viewer/node-age


# Scale up
kubectl get pod
kubectl scale deployment inflate --replicas 5

# Let's analyze the output logs!
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | jq '.'
kubectl logs -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | grep 'launched nodeclaim' | jq '.'
{
  "level": "INFO",
  "time": "2025-03-08T13:42:09.138Z",
  "logger": "controller",
  "message": "launched nodeclaim",
  "commit": "058c665",
  "controller": "nodeclaim.lifecycle",
  "controllerGroup": "karpenter.sh",
  "controllerKind": "NodeClaim",
  "NodeClaim": {
    "name": "default-jrr74"
  },
  "namespace": "",
  "name": "default-jrr74",
  "reconcileID": "8b845ee3-f5d0-4b8d-adfe-eeb4c80fd86e",
  "provider-id": "aws:///ap-northeast-2c/i-09d58eff62e5dafdb",
  "instance-type": "c5a.2xlarge",
  "zone": "ap-northeast-2c",
  "capacity-type": "on-demand",
  "allocatable": {
    "cpu": "7910m",
    "ephemeral-storage": "17Gi",
    "memory": "14162Mi",
    "pods": "58",
    "vpc.amazonaws.com/pod-eni": "38"
  }
}

# Verify
kubectl get nodeclaims
NAME            TYPE          CAPACITY    ZONE              NODE                                                 READY   AGE
default-jrr74   c5a.2xlarge   on-demand   ap-northeast-2c   ip-192-168-180-11.ap-northeast-2.compute.internal   True    7m26s
kubectl describe nodeclaims
...
Spec:
  Expire After:  720h
  Node Class Ref:
    Group:  karpenter.k8s.aws
    Kind:   EC2NodeClass
    Name:   default
  Requirements:
    Key:       kubernetes.io/os
    Operator:  In
    Values:
      linux
    Key:       node.kubernetes.io/instance-type
    Operator:  In
    Values:
      c4.2xlarge
      c4.4xlarge
      c5.2xlarge
      c5.4xlarge
      c5a.2xlarge
      c5a.4xlarge
      c5a.8xlarge
      c5d.2xlarge
      c5d.4xlarge
      c5n.2xlarge
      ...
    ...
    Key:       karpenter.sh/capacity-type
    Operator:  In
    Values:
      on-demand
  Resources:
    Requests:
      Cpu:   4150m
      Pods:  8
    Key:       karpenter.sh/capacity-type
    Operator:  In
    Values:
      on-demand
    Key:       karpenter.k8s.aws/instance-category
    Operator:  In
    Values:
      c
      m
      r
...

#
kubectl get node -l karpenter.sh/registered=true -o jsonpath="{.items[0].metadata.labels}" | jq '.'
...
  "karpenter.sh/capacity-type": "on-demand",
  "karpenter.sh/initialized": "true",
  "karpenter.sh/nodepool": "default",
  "karpenter.sh/registered": "true",
...

# (Optional) Scale up even more!
kubectl scale deployment inflate --replicas 30

  • Check the CreateFleet events - Link (a CLI sketch follows below)
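
A hedged CLI sketch for pulling recent CreateFleet events out of CloudTrail instead of the console, assuming CloudTrail event history is available in the region:

# Most recent CreateFleet calls recorded by CloudTrail
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=CreateFleet \
  --max-results 5 \
  --query 'Events[].{Time:EventTime,Name:EventName,User:Username}' --output table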

 

  • (Reference) Tag information on nodes launched by Karpenter

  • (Reference) Prometheus metrics : karpenter_YYY (a query sketch follows below)
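
A small sketch for listing which karpenter_* metric names are actually exposed, assuming the Prometheus port-forward from step 5 (port 9090) is still running; exact metric names vary by Karpenter version.

# Enumerate metric names starting with karpenter_ via the Prometheus API
curl -s http://127.0.0.1:9090/api/v1/label/__name__/values | jq -r '.data[]' | grep '^karpenter_'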

 

It appears that higher-spec instances were not allocated because of a VcpuLimitExceeded error, i.e. the account's on-demand vCPU quota was reached.
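
A hedged sketch to check the quota behind this; L-1216C47A is the assumed quota code for "Running On-Demand Standard (A, C, D, H, I, M, R, T, Z) instances" and Service Quotas permissions are required.

# Current on-demand standard-instance vCPU quota for this account/region
aws service-quotas get-service-quota --service-code ec2 --quota-code L-1216C47A \
  --query 'Quota.{Name:QuotaName,Value:Value}' --output table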

8. Scale Down deployment

# Now, delete the deployment. After a short amount of time, Karpenter should terminate the empty nodes due to consolidation.
kubectl delete deployment inflate && date
# Let's analyze the output logs!
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | jq '.'
...
{
  "level": "INFO",
  "time": "2025-03-02T06:53:28.780Z",
  "logger": "controller",
  "message": "disrupting nodeclaim(s) via delete, terminating 1 nodes (1 pods) ip-192-168-131-97.ap-northeast-2.compute.internal/c5a.large/on-demand",
  "commit": "058c665",
  "controller": "disruption",
  "namespace": "",
  "name": "",
  "reconcileID": "86a3a45c-2604-4a71-808a-21290301d096",
  "command-id": "51914aee-4e09-436f-af6d-794163c3d1c2",
  "reason": "underutilized"
}
{
  "level": "INFO",
  "time": "2025-03-02T06:53:29.532Z",
  "logger": "controller",
  "message": "tainted node",
  "commit": "058c665",
  "controller": "node.termination",
  "controllerGroup": "",
  "controllerKind": "Node",
  "Node": {
    "name": "ip-192-168-131-97.ap-northeast-2.compute.internal"
  },
  "namespace": "",
  "name": "ip-192-168-131-97.ap-northeast-2.compute.internal",
  "reconcileID": "617bcb4d-5498-44d9-ba1e-6c8b7d97c405",
  "taint.Key": "karpenter.sh/disrupted",
  "taint.Value": "",
  "taint.Effect": "NoSchedule"
}
{
  "level": "INFO",
  "time": "2025-03-02T06:54:03.234Z",
  "logger": "controller",
  "message": "deleted node",
  "commit": "058c665",
  "controller": "node.termination",
  "controllerGroup": "",
  "controllerKind": "Node",
  "Node": {
    "name": "ip-192-168-131-97.ap-northeast-2.compute.internal"
  },
  "namespace": "",
  "name": "ip-192-168-131-97.ap-northeast-2.compute.internal",
  "reconcileID": "8c71fb19-b7ae-4037-afef-fbf1c7343f84"
}
{
  "level": "INFO",
  "time": "2025-03-02T06:54:03.488Z",
  "logger": "controller",
  "message": "deleted nodeclaim",
  "commit": "058c665",
  "controller": "nodeclaim.lifecycle",
  "controllerGroup": "karpenter.sh",
  "controllerKind": "NodeClaim",
  "NodeClaim": {
    "name": "default-mfkgp"
  },
  "namespace": "",
  "name": "default-mfkgp",
  "reconcileID": "757b4d88-2bf2-412c-bf83-3149f9517d85",
  "Node": {
    "name": "ip-192-168-131-97.ap-northeast-2.compute.internal"
  },
  "provider-id": "aws:///ap-northeast-2a/i-00f22c8bde3faf646"
}
{
  "level": "INFO",
  "time": "2025-03-02T07:25:55.661Z",
  "logger": "controller",
  "message": "disrupting nodeclaim(s) via delete, terminating 1 nodes (0 pods) ip-192-168-176-171.ap-northeast-2.compute.internal/c5a.2xlarge/on-demand",
  "commit": "058c665",
  "controller": "disruption",
  "namespace": "",
  "name": "",
  "reconcileID": "0942417e-7ecb-437a-85db-adc553ccade9",
  "command-id": "b2b7c689-91ca-43c5-ac1c-2052bf7418c1",
  "reason": "empty"
}
{
  "level": "INFO",
  "time": "2025-03-02T07:25:56.783Z",
  "logger": "controller",
  "message": "tainted node",
  "commit": "058c665",
  "controller": "node.termination",
  "controllerGroup": "",
  "controllerKind": "Node",
  "Node": {
    "name": "ip-192-168-176-171.ap-northeast-2.compute.internal"
  },
  "namespace": "",
  "name": "ip-192-168-176-171.ap-northeast-2.compute.internal",
  "reconcileID": "6254e6be-2445-4402-b829-0bb75fa540e0",
  "taint.Key": "karpenter.sh/disrupted",
  "taint.Value": "",
  "taint.Effect": "NoSchedule"
}
{
  "level": "INFO",
  "time": "2025-03-02T07:26:49.195Z",
  "logger": "controller",
  "message": "deleted node",
  "commit": "058c665",
  "controller": "node.termination",
  "controllerGroup": "",
  "controllerKind": "Node",
  "Node": {
    "name": "ip-192-168-176-171.ap-northeast-2.compute.internal"
  },
  "namespace": "",
  "name": "ip-192-168-176-171.ap-northeast-2.compute.internal",
  "reconcileID": "6c126a63-8bfa-4828-8ef6-5d22b8c1e7cc"
}

#
kubectl get nodeclaims

 

  • Check the TerminateInstances events from the last hour - Link (a CLI sketch follows below)
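
A hedged CLI sketch for the same check; the date invocation assumes GNU date (on macOS use date -u -v-1H instead).

# TerminateInstances events from the last hour, via CloudTrail
aws cloudtrail lookup-events \
  --lookup-attributes AttributeKey=EventName,AttributeValue=TerminateInstances \
  --start-time "$(date -u -d '1 hour ago' +%Y-%m-%dT%H:%M:%SZ)" \
  --query 'Events[].{Time:EventTime,Name:EventName}' --output table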

 

Disruption (formerly Consolidation) : Expiration , Drift , Consolidation - Workshop , Docs , Spot-to-Spot

Source: https://aws.amazon.com/ko/blogs/compute/applying-spot-to-spot-consolidation-best-practices-with-karpenter/

  • Expiration : automatically expires instances after a default of 720 hours (30 days), forcing nodes to stay up to date
  • Drift : detects configuration changes (NodePool, EC2NodeClass) and applies the required changes
  • Consolidation : selects cost-efficient compute optimization
    • A critical feature for operating compute in a cost-effective manner, Karpenter will optimize our cluster's compute on an on-going basis. For example, if workloads are running on under-utilized compute instances, it will consolidate them to fewer instances.
  • When launching Spot instances, Karpenter calls the AWS EC2 Fleet Instance API, passing the instance types it selected based on the NodePool configuration.
  • The EC2 Fleet Instance API immediately returns the list of launched instances and the list of instances it could not launch; when instances cannot be launched, Karpenter can request alternative capacity or remove soft scheduling constraints from the workload.

Source: https://aws.amazon.com/ko/blogs/compute/applying-spot-to-spot-consolidation-best-practices-with-karpenter/

  • Spot-to-Spot Consolidation required a different approach from on-demand consolidation. For on-demand consolidation, rightsizing and lowest price are the primary metrics.
  • For spot-to-spot consolidation to work, Karpenter requires a diversified instance configuration with at least 15 instance types (see the NodePool example defined in the exercise). Without this constraint, Karpenter risks selecting instances with lower availability and higher interruption rates. (A quick diversity check is sketched below.)
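
A rough way to sanity-check that diversity constraint; this is only a sketch whose filters approximate the NodePool requirements used later (nitro hypervisor, c/m/r families), so treat the count as an upper bound.

# Count candidate instance types in the region that roughly match the NodePool requirements
aws ec2 describe-instance-types \
  --filters "Name=hypervisor,Values=nitro" \
  --query 'length(InstanceTypes[?starts_with(InstanceType, `c`) || starts_with(InstanceType, `m`) || starts_with(InstanceType, `r`)])' \
  --output text
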
# Delete the existing NodePool
kubectl delete nodepool,ec2nodeclass default

# Monitoring
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | jq '.'
eks-node-viewer --resources cpu,memory --node-selector "karpenter.sh/registered=true" --extra-labels eks-node-viewer/node-age
watch -d "kubectl get nodes -L karpenter.sh/nodepool -L node.kubernetes.io/instance-type -L karpenter.sh/capacity-type"

# Create a Karpenter NodePool and EC2NodeClass
cat <<EOF | envsubst | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["on-demand"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-size
          operator: NotIn
          values: ["nano","micro","small","medium"]
        - key: karpenter.k8s.aws/instance-hypervisor
          operator: In
          values: ["nitro"]
      expireAfter: 1h # nodes are terminated automatically after 1 hour
  limits:
    cpu: "1000"
    memory: 1000Gi
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized # policy enables Karpenter to replace nodes when they are either empty or underutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  role: "KarpenterNodeRole-${CLUSTER_NAME}" # replace with your cluster name
  amiSelectorTerms:
    - alias: "al2023@latest"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
EOF

# Verify
kubectl get nodepool,ec2nodeclass

# Deploy a sample workload
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 5
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
      - name: inflate
        image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
        resources:
          requests:
            cpu: 1
            memory: 1.5Gi
        securityContext:
          allowPrivilegeEscalation: false
EOF


#
kubectl get nodes -L karpenter.sh/nodepool -L node.kubernetes.io/instance-type -L karpenter.sh/capacity-type
kubectl get nodeclaims
kubectl describe nodeclaims
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | jq '.'
kubectl logs -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | grep 'launched nodeclaim' | jq '.'


# Scale the inflate workload from 5 to 12 replicas, triggering Karpenter to provision additional capacity
kubectl scale deployment/inflate --replicas 12

# This changes the total memory request for this deployment to around 12Gi, 
# which when adjusted to account for the roughly 600Mi reserved for the kubelet on each node means that this will fit on 2 instances of type m5.large:
kubectl get nodeclaims


# Scale down the workload back down to 5 replicas
kubectl scale deployment/inflate --replicas 5
kubectl get nodeclaims
NAME            TYPE          CAPACITY    ZONE              NODE                                                 READY   AGE
default-ffnzp   c6g.2xlarge   on-demand   ap-northeast-2c   ip-192-168-185-240.ap-northeast-2.compute.internal   True    14m


# We can check the Karpenter logs to get an idea of what actions it took in response to our scaling in the deployment. Wait about 5-10 seconds before running the following command:
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | jq '.'
{
  "level": "INFO",
  "time": "2025-03-02T08:19:13.969Z",
  "logger": "controller",
  "message": "disrupting nodeclaim(s) via delete, terminating 1 nodes (5 pods) ip-192-168-132-48.ap-northeast-2.compute.internal/c6g.2xlarge/on-demand",
  "commit": "058c665",
  "controller": "disruption",
  "namespace": "",
  "name": "",
  "reconcileID": "a900df38-7189-42aa-a3b3-9fcaf944dcf4",
  "command-id": "4b7ef3a5-6962-48a9-bd38-c9898580bb75",
  "reason": "underutilized"
}


# Karpenter can also further consolidate if a node can be replaced with a cheaper variant in response to workload changes. 
# This can be demonstrated by scaling the inflate deployment replicas down to 1, with a total memory request of around 1Gi:
kubectl scale deployment/inflate --replicas 1

kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | jq '.'
{
  "level": "INFO",
  "time": "2025-03-02T08:23:59.683Z",
  "logger": "controller",
  "message": "disrupting nodeclaim(s) via replace, terminating 1 nodes (1 pods) ip-192-168-185-240.ap-northeast-2.compute.internal/c6g.2xlarge/on-demand and replacing with on-demand node from types c6g.large, c7g.large, c5a.large, c6gd.large, m6g.large and 55 other(s)",
  "commit": "058c665",
  "controller": "disruption",
  "namespace": "",
  "name": "",
  "reconcileID": "6669c544-e065-4c97-b594-ec1fb68b68b5",
  "command-id": "b115c17f-3e29-48bc-8da8-d7073f189624",
  "reason": "underutilized"
}

kubectl get nodeclaims
NAME            TYPE          CAPACITY    ZONE              NODE                                                 READY   AGE
default-ff7xn   c6g.large     on-demand   ap-northeast-2b   ip-192-168-109-5.ap-northeast-2.compute.internal     True    78s
default-ffnzp   c6g.2xlarge   on-demand   ap-northeast-2c   ip-192-168-185-240.ap-northeast-2.compute.internal   True    16m

kubectl get nodeclaims                                                                                
NAME            TYPE        CAPACITY    ZONE              NODE                                               READY   AGE
default-ff7xn   c6g.large   on-demand   ap-northeast-2b   ip-192-168-109-5.ap-northeast-2.compute.internal   True    3m3s


# Clean up
kubectl delete deployment inflate
kubectl delete nodepool,ec2nodeclass default

 

 

☞ (Additional exercise) Try out Spot-to-Spot Consolidation

# Available since v0.34.0 by enabling spotToSpotConsolidation in featureGates (a verification sketch follows below)
#helm upgrade karpenter -n kube-system oci://public.ecr.aws/karpenter/karpenter --reuse-values --set settings.featureGates.spotToSpotConsolidation=true
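
# A hedged verification sketch: list the controller's env vars and look for the feature-gate setting
# (assumes the chart surfaces feature gates via container env; variable names may differ by chart version)
kubectl -n "${KARPENTER_NAMESPACE}" get deploy karpenter \
  -o jsonpath='{.spec.template.spec.containers[0].env}' | jq '.[] | select(.name | test("FEATURE"; "i"))'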

# Monitoring
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | jq '.'
eks-node-viewer --resources cpu,memory --node-selector "karpenter.sh/registered=true"

# Create a Karpenter NodePool and EC2NodeClass
cat <<EOF | envsubst | kubectl apply -f -
apiVersion: karpenter.sh/v1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      nodeClassRef:
        group: karpenter.k8s.aws
        kind: EC2NodeClass
        name: default
      requirements:
        - key: kubernetes.io/os
          operator: In
          values: ["linux"]
        - key: karpenter.sh/capacity-type
          operator: In
          values: ["spot"]
        - key: karpenter.k8s.aws/instance-category
          operator: In
          values: ["c", "m", "r"]
        - key: karpenter.k8s.aws/instance-size
          operator: NotIn
          values: ["nano","micro","small","medium"]
        - key: karpenter.k8s.aws/instance-hypervisor
          operator: In
          values: ["nitro"]
      expireAfter: 1h # nodes are terminated automatically after 1 hour
  limits:
    cpu: "1000"
    memory: 1000Gi
  disruption:
    consolidationPolicy: WhenEmptyOrUnderutilized # policy enables Karpenter to replace nodes when they are either empty or underutilized
    consolidateAfter: 1m
---
apiVersion: karpenter.k8s.aws/v1
kind: EC2NodeClass
metadata:
  name: default
spec:
  role: "KarpenterNodeRole-${CLUSTER_NAME}" # replace with your cluster name
  amiSelectorTerms:
    - alias: "bottlerocket@latest"
  subnetSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
  securityGroupSelectorTerms:
    - tags:
        karpenter.sh/discovery: "${CLUSTER_NAME}" # replace with your cluster name
EOF

# Verify
kubectl get nodepool,ec2nodeclass

# Deploy a sample workload
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: inflate
spec:
  replicas: 5
  selector:
    matchLabels:
      app: inflate
  template:
    metadata:
      labels:
        app: inflate
    spec:
      terminationGracePeriodSeconds: 0
      securityContext:
        runAsUser: 1000
        runAsGroup: 3000
        fsGroup: 2000
      containers:
      - name: inflate
        image: public.ecr.aws/eks-distro/kubernetes/pause:3.7
        resources:
          requests:
            cpu: 1
            memory: 1.5Gi
        securityContext:
          allowPrivilegeEscalation: false
EOF

#
kubectl get nodes -L karpenter.sh/nodepool -L node.kubernetes.io/instance-type -L karpenter.sh/capacity-type
kubectl get nodeclaims
kubectl describe nodeclaims
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | jq '.'
kubectl logs -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | grep 'launched nodeclaim' | jq '.'

# Scale the inflate workload from 5 to 12 replicas, triggering Karpenter to provision additional capacity
kubectl scale deployment/inflate --replicas 12

# This changes the total memory request for this deployment to around 12Gi, 
# which when adjusted to account for the roughly 600Mi reserved for the kubelet on each node means that this will fit on 2 instances of type m5.large:
kubectl get nodeclaims

# Scale down the workload back down to 5 replicas
kubectl scale deployment/inflate --replicas 5
kubectl get nodeclaims

# We can check the Karpenter logs to get an idea of what actions it took in response to our scaling in the deployment. Wait about 5-10 seconds before running the following command:
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | jq '.'

# Karpenter can also further consolidate if a node can be replaced with a cheaper variant in response to workload changes. 
# This can be demonstrated by scaling the inflate deployment replicas down to 1, with a total memory request of around 1Gi:
kubectl scale deployment/inflate --replicas 1
kubectl logs -f -n "${KARPENTER_NAMESPACE}" -l app.kubernetes.io/name=karpenter -c controller | jq '.'
kubectl get nodeclaims

# Clean up
kubectl delete deployment inflate
kubectl delete nodepool,ec2nodeclass default

A c7i-flex.2xlarge node was created
One more c7i-flex.2xlarge node was created, making two in total

  • On-demand EC2 could not scale up because of the vCPU limit, but the number of Spot instances scaled up and down
  • I need to study Karpenter a bit more.

 

  • Delete the lab resources - Docs
# Uninstall the Karpenter helm release
helm uninstall karpenter --namespace "${KARPENTER_NAMESPACE}"

# Delete the CloudFormation stack that created the Karpenter IAM Roles, etc.
aws cloudformation delete-stack --stack-name "Karpenter-${CLUSTER_NAME}"

# Delete the EC2 Launch Templates
aws ec2 describe-launch-templates --filters "Name=tag:karpenter.k8s.aws/cluster,Values=${CLUSTER_NAME}" |
    jq -r ".LaunchTemplates[].LaunchTemplateName" |
    xargs -I{} aws ec2 delete-launch-template --launch-template-name {}

# Delete the cluster
eksctl delete cluster --name "${CLUSTER_NAME}"
  • If the CloudFormation stack for the Karpenter IAM Role does not delete cleanly even after the cluster is removed, delete it directly in the AWS CloudFormation console!