CloudNet@ 가시다님이 진행하는 KANS Study 3기 스터디 내용 참고.
실습 환경 구성
구성 : VPC 1개(퍼블릭 서브넷 2개), EC2 인스턴스 4대 (Ubuntu 22.04 LTS, t3.medium - vCPU 2 , Mem 4)
# YAML 파일 다운로드
curl -O https://s3.ap-northeast-2.amazonaws.com/cloudformation.cloudneta.net/kans/kans-6w.yaml
# CloudFormation 스택 배포
# aws cloudformation deploy --template-file kans-6w.yaml --stack-name mylab --parameter-overrides KeyName=<My SSH Keyname> SgIngressSshCidr=<My Home Public IP Address>/32 --region ap-northeast-2
예시) aws cloudformation deploy --template-file kans-6w.yaml --stack-name mylab --parameter-overrides KeyName=kp-gasida SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2
## Tip. 인스턴스 타입 변경 : MyInstanceType=t3.xlarge (vCPU 4, Mem 16)
예시) aws cloudformation deploy --template-file kans-6w.yaml --stack-name mylab --parameter-overrides MyInstanceType=t3.xlarge KeyName=kp-gasida SgIngressSshCidr=$(curl -s ipinfo.io/ip)/32 --region ap-northeast-2
# CloudFormation 스택 배포 완료 후 작업용 EC2 IP 출력
aws cloudformation describe-stacks --stack-name mylab --query 'Stacks[*].Outputs[0].OutputValue' --output text --region ap-northeast-2
# [모니터링] CloudFormation 스택 상태 : 생성 완료 확인
while true; do
date
AWS_PAGER="" aws cloudformation list-stacks \
--stack-status-filter CREATE_IN_PROGRESS CREATE_COMPLETE CREATE_FAILED DELETE_IN_PROGRESS DELETE_FAILED \
--query "StackSummaries[*].{StackName:StackName, StackStatus:StackStatus}" \
--output table
sleep 1
done
# EC2 SSH 접속 : 바로 접속하지 말고, 3~5분 정도 후에 접속 할 것
ssh -i ~/.ssh/kp-gasida.pem ubuntu@$(aws cloudformation describe-stacks --stack-name mylab --query 'Stacks[*].Outputs[0].OutputValue' --output text --region ap-northeast-2)
...
(⎈|default:N/A) root@k3s-s:~# <- kube-ps1 프롬프트가 나오지 않을 경우 ssh logout 후 다시 ssh 접속 할 것!
- k3s : Lightweight Kubernetes. Easy to install, half the memory, all in a binary of less than 100 MB - Docs
- rancher 회사에서 IoT 및 edge computing 디바이스 위에서도 동작할 수 있도록 만들어진 경량 k8s 입니다.
- 장점 : 설치가 쉽다, 가볍다(etcd, cloud manager 등 무거운 컴포넌트 제거), 학습용 및 테스트 시 필요한 기능들은 대부분 탑재
- What is K3s : K3s is a fully compliant Kubernetes distribution with the following enhancements
- 단일 바이너리 또는 최소 컨테이너 이미지로 배포됩니다.
- 기본 저장소 백엔드로 sqlite3를 사용하는 경량 데이터 저장소입니다. etcd3, MySQL, Postgres도 사용할 수 있습니다.
- TLS 및 옵션의 복잡성을 처리하는 간단한 실행 프로그램으로 감싸져 있습니다.
- 기본적으로 보안이 설정되어 있으며 경량 환경에 적합한 합리적인 기본값을 제공합니다.
- 모든 Kubernetes 컨트롤 플레인 구성 요소의 운영이 단일 바이너리와 프로세스에 캡슐화되어 있어 K3s가 인증서 배포와 같은 복잡한 클러스터 운영을 자동화하고 관리할 수 있습니다.
- 외부 종속성이 최소화되었으며, 요구 사항은 최신 커널과 cgroup 마운트뿐입니다.
- 간편한 "batteries-included(필요한 패키지 포함)" 클러스터 생성을 위해 필요한 종속성을 포함하고 있습니다:
- containerd / cri-dockerd 컨테이너 런타임 (CRI)
- Flannel 컨테이너 네트워크 인터페이스 (CNI)
- CoreDNS 클러스터 DNS
- Traefik 인그레스 컨트롤러
- ServiceLB 로드 밸런서 컨트롤러
- Kube-router 네트워크 정책 컨트롤러
- Local-path-provisioner 퍼시스턴트 볼륨 컨트롤러
- Spegel 분산 컨테이너 이미지 레지스트리 미러
- 호스트 유틸리티 (iptables, socat 등)
- k3s 기본 정보 확인 : k8s v1.30.x → 현재 ingress-nginx controller 는 k8s 1.30.x 까지 버전 호환 테스트가 완료됨 ('24.10.6 기준)
# Install k3s-server
curl -sfL https://get.k3s.io | INSTALL_K3S_EXEC=" --disable=traefik" sh -s - server --token kanstoken --cluster-cidr "172.16.0.0/16" --service-cidr "10.10.200.0/24" --write-kubeconfig-mode 644
# Install k3s-agent
curl -sfL https://get.k3s.io | K3S_URL=https://192.168.10.10:6443 K3S_TOKEN=kanstoken sh -s -
- k3s 는 경량화를 위해 구성 방식이 업스트림 k8s 와 다름 : 컨트롤 플레인 컴포넌트가 별도 파드가 아니라 단일 k3s 프로세스(바이너리)에 내장되어 동작 (아래 확인 예시 참고)
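(참고) 아래는 kube-apiserver, scheduler 등 컨트롤 플레인 컴포넌트가 별도 파드가 아니라 단일 k3s 프로세스에 내장되어 동작하는 것을 확인해 보는 간단한 예시입니다. 출력 형태는 환경/버전에 따라 다를 수 있습니다.
# k3s 서버가 단일 프로세스로 동작하는지 확인
ps -ef | grep -E '[k]3s server'
systemctl status k3s --no-pager | head -5
# kube-system 에 apiserver/scheduler/controller-manager/etcd 파드가 없는 것 확인 (출력이 없으면 k3s 프로세스에 내장되어 동작)
kubectl get pod -n kube-system | grep -E 'apiserver|scheduler|controller-manager|etcd'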
# 노드 확인
kubectl get node -owide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k3s-s Ready control-plane,master 23m v1.30.5+k3s1 192.168.10.10 <none> Ubuntu 22.04.5 LTS 6.8.0-1015-aws containerd://1.7.21-k3s2
k3s-w1 Ready <none> 23m v1.30.5+k3s1 192.168.10.101 <none> Ubuntu 22.04.5 LTS 6.8.0-1015-aws containerd://1.7.21-k3s2
k3s-w2 Ready <none> 23m v1.30.5+k3s1 192.168.10.102 <none> Ubuntu 22.04.5 LTS 6.8.0-1015-aws containerd://1.7.21-k3s2
k3s-w3 Ready <none> 23m v1.30.5+k3s1 192.168.10.103 <none> Ubuntu 22.04.5 LTS 6.8.0-1015-aws containerd://1.7.21-k3s2
# kubecolor alias 로 kc 설정 되어 있음
kc describe node k3s-s # Taints 없음
kc describe node k3s-w1
# 파드 확인
kubectl get pod -n kube-system
NAME READY STATUS RESTARTS AGE
coredns-7b98449c4-jmhgk 1/1 Running 0 21m
local-path-provisioner-6795b5f9d8-w6h8s 1/1 Running 0 21m
metrics-server-cdcc87586-m4ndt 1/1 Running 0 21m
#
kubectl top node
kubectl top pod -A --sort-by='cpu'
kubectl top pod -A --sort-by='memory'
kubectl get storageclass
# config 정보(위치) 확인
kubectl get pod -v=6
I1006 13:04:02.858105 4325 loader.go:395] Config loaded from file: /etc/rancher/k3s/k3s.yaml
I1006 13:04:02.872677 4325 round_trippers.go:553] GET https://127.0.0.1:6443/api/v1/namespaces/default/pods?limit=500 200 OK in 5 milliseconds
No resources found in default namespace.
cat /etc/rancher/k3s/k3s.yaml
export | grep KUBECONFIG
# 네트워크 정보 확인 : flannel CNI(vxlan mode), podCIDR
ip -c addr
ip -c route
cat /run/flannel/subnet.env
kubectl get nodes -o jsonpath='{.items[*].spec.podCIDR}' ;echo
kubectl describe node | grep -A3 Annotations
brctl show
# 서비스와 엔드포인트 확인
kubectl get svc,ep -A
# iptables 정보 확인
iptables -t filter -S
iptables -t nat -S
iptables -t mangle -S
# tcp listen 포트 정보 확인
ss -tnlp
0. Ingress 요약
인그레스(Ingress) 를 통한 통신 흐름
Nginx 인그레스 컨트롤러 경우 : 외부에서 인그레스로 접속 시 Nginx 인그레스 컨트롤러 파드로 인입되고, 이후 애플리케이션 파드의 IP로 직접 통신
클러스터 내부를 외부에 노출 - 발전 단계
1. 파드 생성 : K8S 클러스터 내부에서만 접속
2. 서비스(Cluster Type) 연결 : K8S 클러스터 내부에서만 접속
- 동일한 애플리케이션의 다수의 파드의 접속을 용이하게 하기 위한 서비스에 접속
3. 서비스(NodePort Type) 연결 : 외부 클라이언트가 서비스를 통해서 클러스터 내부의 파드로 접속
- 서비스(NodePort Type)의 일부 단점을 보완한 서비스(LoadBalancer Type) 도 있습니다!
4. 인그레스 컨트롤러 파드를 배치 : 서비스 앞단에 HTTP 고급 라우팅 등 기능 동작을 위한 배치
- 인그레스(정책)이 적용된 인그레스 컨트롤러 파드(예. nginx pod)를 앞단에 배치하여 고급 라우팅 등 기능을 제공
5. 인그레스 컨트롤러 파드 이중화 구성 : Active(Leader) - Standby(Follower) 로 Active 파드 장애에 대비
6. 인그레스 컨트롤러 파드를 외부에 노출 : 인그레스 컨트롤러 파드를 외부에서 접속하기 위해서 노출(expose)
- 인그레스 컨트롤러 노출 시 서비스(NodePort Type) 보다는 좀 더 많은 기능을 제공하는 서비스(LoadBalancer Type)를 권장합니다 (80/443 포트 오픈 시)
7. 인그레스와 파드간 내부 연결의 효율화 방안 : 인그레스 컨트롤러 파드(Layer7 동작)에서 서비스 파드의 IP로 직접 연결
- 인그레스 컨트롤러 파드는 K8S API 서버로부터 서비스의 엔드포인트 정보(파드 IP)를 획득 후 바로 파드의 IP로 연결 - 링크 (직접 연결 여부 확인 예시는 아래 참고)
- 지원되는 인그레스 컨트롤러 : Nginx, Traefik 등 현재 대부분의 인그레스 컨트롤러가 지원함
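(참고) 아래는 인그레스 컨트롤러가 서비스(ClusterIP)를 거치지 않고 파드 IP를 업스트림으로 직접 사용하는지 확인해 보는 예시 스케치입니다. 뒤의 실습 환경(ingress 네임스페이스에 ingress-nginx 설치, svc1-web 서비스 생성 이후)을 가정하며, 컨트롤러 이미지에 포함된 /dbg 디버그 툴을 사용한다고 가정합니다.
# 서비스의 엔드포인트(파드 IP) 확인
kubectl get ep svc1-web
# 컨트롤러가 보유한 동적 백엔드 목록에서 동일한 파드 IP가 업스트림으로 등록되어 있는지 비교
kubectl exec -n ingress deploy/ingress-nginx-controller -- /dbg backends all | grep -B2 -A2 '"address"'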
1. 인그레스(Ingress) 소개
인그레스 소개 : 클러스터 내부의 서비스(ClusterIP, NodePort, Loadbalancer)를 외부로 노출(HTTP/HTTPS) - Web Proxy 역할
- Make your HTTP (or HTTPS) network service available using a protocol-aware configuration mechanism, that understands web concepts like URIs, hostnames, paths, and more. The Ingress concept lets you map traffic to different backends based on rules you define via the Kubernetes API.
- An API object that manages external access to the services in a cluster, typically HTTP.
- Ingress may provide load balancing, SSL termination and name-based virtual hosting.
- Ingress is frozen. New features are being added to the Gateway API.
참고: 김태민 기술 블로그 - 링크
Ingress 비교 - 링크
2. Nginx 인그레스 컨트롤러 설치
인그레스(Ingress) 소개 : 클러스터 내부의 HTTP/HTTPS 서비스를 외부로 노출(expose) - 링크
인그레스 컨트롤러 : 인그레스의 실제 동작 구현은 인그레스 컨트롤러(Nginx, Kong 등)가 처리 - 링크
- 쿠버네티스는 Ingress API 만 정의하고 실제 구현은 add-on 에 맡김
- Ingress-Nginx Controller - 링크 ⇒ 간편한 테스트를 위해서 NodePort 타입(externalTrafficPolicy: Local) 설정
- 다양한 Nginx 인그레스 컨트롤러 인입 방법
- MetalLB 사용, Via the host network 사용, Using a self-provisioned edge 사용, External IPs 사용 - 링크
Ingress NGINX Controller for Kubernetes - Home , Github
- ingress-nginx is an Ingress controller for Kubernetes using NGINX as a reverse proxy and load balancer.
- How it works : configmap 설정을 nginx config 에 적용(by lua), 설정 변경 시 (필요한 경우에만 최소한으로) reload - Docs (reload 발생 여부는 아래 리스트 다음의 로그 확인 예시 참고)
- The goal of this Ingress controller is the assembly of a configuration file (nginx.conf).
- The main implication of this requirement is the need to reload NGINX after any change in the configuration file.
- Though it is important to note that we don't reload Nginx on changes that impact only an upstream configuration (i.e Endpoints change when you deploy your app).
- We use lua-nginx-module to achieve this. Check below to learn more about how it's done.
- When a reload is required
- New Ingress Resource Created.
- TLS section is added to existing Ingress.
- Change in Ingress annotations that impacts more than just upstream configuration. For instance load-balance annotation does not require a reload.
- A path is added/removed from an Ingress.
- An Ingress, Service, Secret is removed.
- Some missing referenced object from the Ingress is available, like a Service or Secret.
- A Secret is updated.
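(참고) 아래는 reload 가 실제로 언제 발생하는지 컨트롤러 로그로 확인해 보는 간단한 예시입니다. 아래에서 ingress 네임스페이스에 컨트롤러를 설치한 뒤 실행하며, 로그 문구는 버전에 따라 다를 수 있습니다.
# 컨트롤러 로그에서 reload 이벤트만 모니터링
kubectl logs -n ingress deploy/ingress-nginx-controller -f | grep -i reload
# 인그레스 리소스를 새로 만들면 reload 로그가 출력되고, 파드 수만 변경(엔드포인트 변경)하면 reload 없이 lua 로 동적 반영됨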
- Ingress-Nginx 컨트롤러 생성 - ArtifactHub release
# Ingress-Nginx 컨트롤러 생성
cat <<EOT> ingress-nginx-values.yaml
controller:
service:
type: NodePort
nodePorts:
http: 30080
https: 30443
nodeSelector:
kubernetes.io/hostname: "k3s-s"
metrics:
enabled: true
serviceMonitor:
enabled: true
EOT
helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
kubectl create ns ingress
helm install ingress-nginx ingress-nginx/ingress-nginx -f ingress-nginx-values.yaml --namespace ingress --version 4.11.2
# 확인
kubectl get all -n ingress
kc describe svc -n ingress ingress-nginx-controller
# externalTrafficPolicy 설정
kubectl patch svc -n ingress ingress-nginx-controller -p '{"spec":{"externalTrafficPolicy": "Local"}}'
# 기본 nginx conf 파일 확인
kc describe cm -n ingress ingress-nginx-controller
kubectl exec deploy/ingress-nginx-controller -n ingress -it -- cat /etc/nginx/nginx.conf
# 관련된 정보 확인 : 포드(Nginx 서버), 서비스, 디플로이먼트, 리플리카셋, 컨피그맵, 롤, 클러스터롤, 서비스 어카운트 등
kubectl get all,sa,cm,secret,roles -n ingress
kc describe clusterroles ingress-nginx
kubectl get pod,svc,ep -n ingress -o wide -l app.kubernetes.io/component=controller
# 버전 정보 확인
POD_NAMESPACE=ingress
POD_NAME=$(kubectl get pods -n $POD_NAMESPACE -l app.kubernetes.io/name=ingress-nginx --field-selector=status.phase=Running -o name)
kubectl exec $POD_NAME -n $POD_NAMESPACE -- /nginx-ingress-controller --version
3. 인그레스(Ingress) 실습 및 통신 흐름 확인
실습 구성도
- 컨트롤플레인 노드에 인그레스 컨트롤러(Nginx) 파드를 생성, NodePort 로 외부에 노출
- 인그레스 정책 설정 : Host/Path routing, 실습의 편리를 위해서 도메인 없이 IP로 접속 설정 가능
3.1 디플로이먼트와 서비스를 생성
- svc1-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy1-websrv
spec:
replicas: 1
selector:
matchLabels:
app: websrv
template:
metadata:
labels:
app: websrv
spec:
containers:
- name: pod-web
image: nginx
---
apiVersion: v1
kind: Service
metadata:
name: svc1-web
spec:
ports:
- name: web-port
port: 9001
targetPort: 80
selector:
app: websrv
type: ClusterIP
- svc2-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy2-guestsrv
spec:
replicas: 2
selector:
matchLabels:
app: guestsrv
template:
metadata:
labels:
app: guestsrv
spec:
containers:
- name: pod-guest
image: gcr.io/google-samples/kubernetes-bootcamp:v1
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: svc2-guest
spec:
ports:
- name: guest-port
port: 9002
targetPort: 8080
selector:
app: guestsrv
type: NodePort
- svc3-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: deploy3-adminsrv
spec:
replicas: 3
selector:
matchLabels:
app: adminsrv
template:
metadata:
labels:
app: adminsrv
spec:
containers:
- name: pod-admin
image: k8s.gcr.io/echoserver:1.5
ports:
- containerPort: 8080
---
apiVersion: v1
kind: Service
metadata:
name: svc3-admin
spec:
ports:
- name: admin-port
port: 9003
targetPort: 8080
selector:
app: adminsrv
- 생성 및 확인
# 모니터링
watch -d 'kubectl get ingress,svc,ep,pod -owide'
# 생성
kubectl taint nodes k3s-s role=controlplane:NoSchedule
curl -s -O https://raw.githubusercontent.com/gasida/NDKS/main/7/svc1-pod.yaml
curl -s -O https://raw.githubusercontent.com/gasida/NDKS/main/7/svc2-pod.yaml
curl -s -O https://raw.githubusercontent.com/gasida/NDKS/main/7/svc3-pod.yaml
kubectl apply -f svc1-pod.yaml,svc2-pod.yaml,svc3-pod.yaml
# 확인 : svc1, svc3 은 ClusterIP 로 클러스터 외부에서는 접속할 수 없다 >> Ingress 는 연결 가능!
kubectl get pod,svc,ep
3.2 인그레스(정책) 생성 - 링크
- ingress1.yaml
cat <<EOT> ingress1.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-1
annotations:
#nginx.ingress.kubernetes.io/upstream-hash-by: "true"
spec:
ingressClassName: nginx
rules:
- http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: svc1-web
port:
number: 80
- path: /guest
pathType: Prefix
backend:
service:
name: svc2-guest
port:
number: 8080
- path: /admin
pathType: Prefix
backend:
service:
name: svc3-admin
port:
number: 8080
EOT
- 인그레스 생성 및 확인
# 모니터링
watch -d 'kubectl get ingress,svc,ep,pod -owide'
# 생성
kubectl apply -f ingress1.yaml
# 확인
kubectl get ingress
kc describe ingress ingress-1
...
Rules:
Host Path Backends
---- ---- --------
*
/ svc1-web:80 ()
/guest svc2-guest:8080 ()
/admin svc3-admin:8080 ()
...
# 설정이 반영된 nginx conf 파일 확인
kubectl exec deploy/ingress-nginx-controller -n ingress -it -- cat /etc/nginx/nginx.conf
kubectl exec deploy/ingress-nginx-controller -n ingress -it -- cat /etc/nginx/nginx.conf | grep 'location /' -A5
location /guest/ {
set $namespace "default";
set $ingress_name "ingress-1";
set $service_name "svc2-guest";
set $service_port "8080";
--
location /admin/ {
set $namespace "default";
set $ingress_name "ingress-1";
set $service_name "svc3-admin";
set $service_port "8080";
--
location / {
set $namespace "default";
set $ingress_name "ingress-1";
set $service_name "svc1-web";
set $service_port "80";
--
...
3.3 인그레스를 통한 내부 접속
- Nginx 인그레스 컨트롤러를 통한 접속(HTTP 인입) 경로 : 인그레스 컨트롤러 파드에서 서비스 파드의 IP로 직접 연결 (아래 두번째 그림)
- 인그레스(Nginx 인그레스 컨트롤러)를 통한 접속(HTTP 인입) 확인*** : HTTP 부하분산 & PATH 기반 라우팅, 애플리케이션 파드에 연결된 서비스는 Bypass
# (krew 플러그인 설치 시) 인그레스 정책 확인
kubectl ingress-nginx ingresses
INGRESS NAME HOST+PATH ADDRESSES TLS SERVICE SERVICE PORT ENDPOINTS
ingress-1 / 192.168.10.10 NO svc1-web 80 1
ingress-1 /guest 192.168.10.10 NO svc2-guest 8080 2
ingress-1 /admin 192.168.10.10 NO svc3-admin 8080 3
#
kubectl get ingress
NAME CLASS HOSTS ADDRESS PORTS AGE
ingress-1 nginx * 10.10.200.24 80 3m44s
kubectl describe ingress ingress-1 | sed -n "5, \$p"
Rules:
Host Path Backends
---- ---- --------
* / svc1-web:80 ()
/guest svc2-guest:8080 ()
/admin svc3-admin:8080 ()
# 접속 로그 확인 : kubetail 설치되어 있음 - 출력되는 nginx 의 로그의 IP 확인
kubetail -n ingress -l app.kubernetes.io/component=controller
-------------------------------
# 자신의 집 PC에서 인그레스를 통한 접속 : 각각
echo -e "Ingress1 sv1-web URL = http://$(curl -s ipinfo.io/ip):30080"
echo -e "Ingress1 sv2-guest URL = http://$(curl -s ipinfo.io/ip):30080/guest"
echo -e "Ingress1 sv3-admin URL = http://$(curl -s ipinfo.io/ip):30080/admin"
# svc1-web 접속
MYIP=<EC2 공인 IP>
MYIP=13.124.93.150
curl -s $MYIP:30080
# svc2-guest 접속
curl -s $MYIP:30080/guest
curl -s $MYIP:30080/guest
for i in {1..100}; do curl -s $MYIP:30080/guest ; done | sort | uniq -c | sort -nr
# svc3-admin 접속 > 기본적으로 Nginx 는 라운드로빈 부하분산 알고리즘을 사용 >> Client_address 와 XFF 주소는 어떤 주소인가요?
curl -s $MYIP:30080/admin
curl -s $MYIP:30080/admin | egrep '(client_address|x-forwarded-for)'
for i in {1..100}; do curl -s $MYIP:30080/admin | grep Hostname ; done | sort | uniq -c | sort -nr
# (옵션) 디플로이먼트의 파드 갯수를 증가/감소 설정 후 접속 테스트 해보자
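# (예시 스케치) deploy3-adminsrv 를 대상으로 파드 수를 조정한 뒤 부하분산 대상 변화를 확인 (scale 은 k3s-s 에서, curl 은 자신의 PC 에서 실행한다고 가정)
kubectl scale deployment deploy3-adminsrv --replicas=5
for i in {1..100}; do curl -s $MYIP:30080/admin | grep Hostname ; done | sort | uniq -c | sort -nr
kubectl scale deployment deploy3-adminsrv --replicas=1
for i in {1..100}; do curl -s $MYIP:30080/admin | grep Hostname ; done | sort | uniq -c | sort -nr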
3.4 패킷 분석
- 외부에서 접속(그림 왼쪽) 후 Nginx 파드(Layer7 동작)는 HTTP 헤더에 정보 추가(XFF)후 파드의 IP로 직접 전달
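(참고) 아래는 파드로 전달되는 요청에 XFF 헤더가 추가되어 있는지 워커 노드에서 직접 캡처해 보는 예시 스케치입니다. 캡처 위치/인터페이스와 포트 8080(svc3-admin 파드 포트)은 실습 환경 기준 가정입니다.
# 워커 노드에서 파드로 인입되는 HTTP 요청 헤더 확인
tcpdump -l -i any -nnA 'tcp port 8080' | grep -i -E 'GET /|x-forwarded-for'
# 또는 ngrep 이 설치되어 있다면
# ngrep -tW byline -d any '' 'tcp port 8080'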
3.5 (참고) AWS Ingress (ALB) 모드
- 인스턴스 모드 : AWS ALB(Ingress)로 인입 후 각 워커노드의 NodePort 로 전달 후 IPtables 룰(SEP)에 따라 파드로 분배
- IP 모드 : nginx ingress controller 동작과 유사하게 AWS LoadBalancer Controller 파드가 kube api 를 통해서 파드의 IP를 제공받아서 AWS ALB 에 타켓(파드 IP)를 설정
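(참고) 아래는 AWS Load Balancer Controller 사용 시 인스턴스 모드/IP 모드를 선택하는 인그레스 애너테이션 예시 스케치입니다. EKS + AWS Load Balancer Controller 환경을 가정한 참고용이며, 이번 k3s 실습과는 별개입니다.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: alb-ingress-example   # 예시용 이름(가정)
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip   # ip 모드(파드 IP 직접 타겟), instance 지정 시 NodePort 경유(서비스가 NodePort 타입이어야 함)
spec:
  ingressClassName: alb
  rules:
  - http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: svc3-admin
            port:
              number: 9003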
3.6 Host 기반 라우팅
- ingress2.yaml
cat <<EOT> ingress2.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-2
spec:
ingressClassName: nginx
rules:
- host: kans.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: svc3-admin
port:
number: 8080
- host: "*.kans.com"
http:
paths:
- path: /echo
pathType: Prefix
backend:
service:
name: svc3-admin
port:
number: 8080
EOT
- 인그레스 생성 및 확인
# 터미널1
watch -d 'kubectl get ingresses,svc,ep,pod -owide'
# 도메인 변경
MYDOMAIN1=<각자 자신의 닉네임의 도메인> 예시) gasida.com
MYDOMAIN1=gasida.com
sed -i "s/kans.com/$MYDOMAIN1/g" ingress2.yaml
# 생성
kubectl apply -f ingress2.yaml,svc3-pod.yaml
# 확인
kubectl get ingress
kubectl describe ingress ingress-2
kubectl describe ingress ingress-2 | sed -n "5, \$p"
Rules:
Host Path Backends
---- ---- --------
kans.com / svc3-admin:8080 ()
*.kans.com /echo svc3-admin:8080 ()
...
- 인그레스(Nginx 인그레스 컨트롤러)를 통한 접속(HTTP 인입) 확인
# 로그 모니터링
kubetail -n ingress -l app.kubernetes.io/component=controller
# (옵션) ingress nginx 파드 vethY 에서 패킷 캡처 후 확인 해 볼 것
------------
# 자신의 PC 에서 접속 테스트
# svc3-admin 접속 > 결과 확인 : 왜 접속이 되지 않는가? HTTP 헤더에 Host 필드를 잘 확인해보자!
curl $MYIP:30080 -v
curl $MYIP:30080/echo -v
# mypc에서 접속을 위한 설정
## /etc/hosts 수정 : 도메인 이름으로 접속하기 위해서 변수 지정
## 윈도우 C:\Windows\System32\drivers\etc\hosts
## 맥 sudo vim /etc/hosts
MYDOMAIN1=<각자 자신의 닉네임의 도메인>
MYDOMAIN2=<test.각자 자신의 닉네임의 도메인>
MYDOMAIN1=kans.com
MYDOMAIN2=test.kans.com
echo $MYIP $MYDOMAIN1 $MYDOMAIN2
echo "$MYIP $MYDOMAIN1" | sudo tee -a /etc/hosts
echo "$MYIP $MYDOMAIN2" | sudo tee -a /etc/hosts
cat /etc/hosts | grep $MYDOMAIN1
# svc3-admin 접속 > 결과 확인
curl $MYDOMAIN1:30080 -v
curl $MYDOMAIN1:30080/admin
curl $MYDOMAIN1:30080/echo
curl $MYDOMAIN1:30080/echo/1
curl $MYDOMAIN2:30080 -v
curl $MYDOMAIN2:30080/admin
curl $MYDOMAIN2:30080/echo
curl $MYDOMAIN2:30080/echo/1
curl $MYDOMAIN2:30080/echo/1/2
## (옵션) /etc/hosts 파일 변경 없이 접속 방안
curl -H "host: $MYDOMAIN1" $MYIP:30080
- 오브젝트 삭제 kubectl delete deployments,svc,ingress --all
3.7 카나리 업그레이드
- 배포 자동화 지원(최소 중단, 무중단) - 롤링 업데이트, 카나리 업데이트, 블루/그린 업데이트 - 링크 링크2 하이커퍼넥스-블로그
- 롤링 업데이트 (kubectl 예시는 아래 스케치 참고)
- 카나리 업데이트
- 블루/그린 업데이트
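아래는 이 중 롤링 업데이트를 kubectl 로 간단히 수행해 보는 예시 스케치입니다. 바로 아래에서 생성하는 dp-v1 디플로이먼트의 pod-v1 컨테이너 이미지를 교체한다고 가정합니다.
# 이미지 교체로 롤링 업데이트 수행
kubectl set image deployment dp-v1 pod-v1=k8s.gcr.io/echoserver:1.6
kubectl rollout status deployment dp-v1
kubectl rollout history deployment dp-v1
# 문제 발생 시 이전 리비전으로 롤백
kubectl rollout undo deployment dp-v1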
- canary-svc1-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: dp-v1
spec:
replicas: 3
selector:
matchLabels:
app: svc-v1
template:
metadata:
labels:
app: svc-v1
spec:
containers:
- name: pod-v1
image: k8s.gcr.io/echoserver:1.5
ports:
- containerPort: 8080
terminationGracePeriodSeconds: 0
---
apiVersion: v1
kind: Service
metadata:
name: svc-v1
spec:
ports:
- name: web-port
port: 9001
targetPort: 8080
selector:
app: svc-v1
- canary-svc2-pod.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
name: dp-v2
spec:
replicas: 3
selector:
matchLabels:
app: svc-v2
template:
metadata:
labels:
app: svc-v2
spec:
containers:
- name: pod-v2
image: k8s.gcr.io/echoserver:1.6
ports:
- containerPort: 8080
terminationGracePeriodSeconds: 0
---
apiVersion: v1
kind: Service
metadata:
name: svc-v2
spec:
ports:
- name: web-port
port: 9001
targetPort: 8080
selector:
app: svc-v2
- 생성 및 확인
# 터미널1
watch -d 'kubectl get ingress,svc,ep,pod -owide'
# 생성
curl -s -O https://raw.githubusercontent.com/gasida/NDKS/main/7/canary-svc1-pod.yaml
curl -s -O https://raw.githubusercontent.com/gasida/NDKS/main/7/canary-svc2-pod.yaml
kubectl apply -f canary-svc1-pod.yaml,canary-svc2-pod.yaml
# 확인
kubectl get svc,ep,pod
# 파드 버전 확인: 1.13.0 vs 1.13.1
for pod in $(kubectl get pod -o wide -l app=svc-v1 |awk 'NR>1 {print $6}'); do curl -s $pod:8080 | egrep '(Hostname|nginx)'; done
Hostname: dp-v1-cdd8dc687-gcgsz
server_version=nginx: 1.13.0 - lua: 10008
for pod in $(kubectl get pod -o wide -l app=svc-v2 |awk 'NR>1 {print $6}'); do curl -s $pod:8080 | egrep '(Hostname|nginx)'; done
Hostname: dp-v2-785f69bd6-hh624
server_version=nginx: 1.13.1 - lua: 10008
- canary-ingress1.yaml
cat <<EOT> canary-ingress1.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-canary-v1
spec:
ingressClassName: nginx
rules:
- host: kans.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: svc-v1
port:
number: 8080
EOT
- canary-ingress2.yaml
cat <<EOT> canary-ingress2.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: ingress-canary-v2
annotations:
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "10"
spec:
ingressClassName: nginx
rules:
- host: kans.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: svc-v2
port:
number: 8080
EOT
- 카나리 업그레이드 확인
# 터미널1
watch -d 'kubectl get ingress,svc,ep'
# 도메인 변경
MYDOMAIN1=<각자 자신의 닉네임의 도메인> 예시) gasida.com
sed -i "s/kans.com/$MYDOMAIN1/g" canary-ingress1.yaml
sed -i "s/kans.com/$MYDOMAIN1/g" canary-ingress2.yaml
# 생성
kubectl apply -f canary-ingress1.yaml,canary-ingress2.yaml
# 로그 모니터링
kubetail -n ingress -l app.kubernetes.io/component=controller
# 접속 테스트
curl -s $MYDOMAIN1:30080
curl -s $MYDOMAIN1:30080 | grep nginx
# 접속 시 v1 v2 버전별 비율이 어떻게 되나요? 왜 이렇게 되나요?
for i in {1..100}; do curl -s $MYDOMAIN1:30080 | grep nginx ; done | sort | uniq -c | sort -nr
for i in {1..1000}; do curl -s $MYDOMAIN1:30080 | grep nginx ; done | sort | uniq -c | sort -nr
while true; do curl -s --connect-timeout 1 $MYDOMAIN1:30080 | grep Hostname ; echo "--------------" ; date "+%Y-%m-%d %H:%M:%S" ; sleep 1; done
# 비율 조정 >> 개발 배포 버전 전략에 유용하다!
kubectl annotate --overwrite ingress ingress-canary-v2 nginx.ingress.kubernetes.io/canary-weight=50
# 접속 테스트
for i in {1..100}; do curl -s $MYDOMAIN1:30080 | grep nginx ; done | sort | uniq -c | sort -nr
for i in {1..1000}; do curl -s $MYDOMAIN1:30080 | grep nginx ; done | sort | uniq -c | sort -nr
# (옵션) 비율 조정 << 어떻게 비율이 조정될까요?
kubectl annotate --overwrite ingress ingress-canary-v2 nginx.ingress.kubernetes.io/canary-weight=100
for i in {1..100}; do curl -s $MYDOMAIN1:30080 | grep nginx ; done | sort | uniq -c | sort -nr
# (옵션) 비율 조정 << 어떻게 비율이 조정될까요?
kubectl annotate --overwrite ingress ingress-canary-v2 nginx.ingress.kubernetes.io/canary-weight=0
for i in {1..100}; do curl -s $MYDOMAIN1:30080 | grep nginx ; done | sort | uniq -c | sort -nr
- 오브젝트 삭제 kubectl delete deployments,svc,ingress --all
3.8 HTTPS 처리 (TLS 종료) - 링크
- svc-pod.yaml
apiVersion: v1
kind: Pod
metadata:
name: pod-https
labels:
app: https
spec:
containers:
- name: container
image: k8s.gcr.io/echoserver:1.6
terminationGracePeriodSeconds: 0
---
apiVersion: v1
kind: Service
metadata:
name: svc-https
spec:
selector:
app: https
ports:
- port: 8080
- ssl-termination-ingress.yaml
cat <<EOT> ssl-termination-ingress.yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: https
spec:
ingressClassName: nginx
tls:
- hosts:
- kans.com
secretName: secret-https
rules:
- host: kans.com
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: svc-https
port:
number: 8080
EOT
- 생성 확인 및 secret 생성 후 접속 확인
# 서비스와 파드 생성
curl -s -O https://raw.githubusercontent.com/gasida/NDKS/main/7/svc-pod.yaml
kubectl apply -f svc-pod.yaml
# 도메인 변경
MYDOMAIN1=<각자 자신의 닉네임의 도메인> 예시) gasida.com
MYDOMAIN1=kans.com
echo $MYDOMAIN1
sed -i "s/kans.com/$MYDOMAIN1/g" ssl-termination-ingress.yaml
# 인그레스 생성
kubectl apply -f ssl-termination-ingress.yaml
# 인증서 생성
# 예시) openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=dkos.com/O=dkos.com"
mkdir key && cd key
MYDOMAIN1=kans.com
openssl req -x509 -nodes -days 365 -newkey rsa:2048 -keyout tls.key -out tls.crt -subj "/CN=$MYDOMAIN1/O=$MYDOMAIN1"
tree
# Secret 생성
kubectl create secret tls secret-https --key tls.key --cert tls.crt
# Secret 확인
kubectl get secrets secret-https
kubectl get secrets secret-https -o yaml
-------------------
# 자신의 PC 에서 접속 확인 : PC 웹브라우저
# 접속 확인 : -k 는 https 접속 시 인증서 검증을 생략하는 옵션, 접속 포트(30443) 확인
curl -Lk https://$MYDOMAIN1:30443
## (옵션) /etc/hosts 파일 변경 없이 접속 방안
curl -Lk -H "host: $MYDOMAIN1" https://$MYDOMAIN1:30443
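# (참고) 아래는 서빙되는 인증서의 Subject(CN)/유효기간을 openssl 로 확인해 보는 예시
echo | openssl s_client -connect $MYDOMAIN1:30443 -servername $MYDOMAIN1 2>/dev/null | openssl x509 -noout -subject -issuer -dates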
- Nginx SSL Termination 패킷 확인 : 중간 172.16.29.11 이 nginx controller
# 패킷 캡처 명령어 참고
export IngHttp=$(kubectl get service -n ingress ingress-nginx-controller -o jsonpath='{.spec.ports[0].nodePort}')
export IngHttps=$(kubectl get service -n ingress ingress-nginx-controller -o jsonpath='{.spec.ports[1].nodePort}')
tcpdump -i <nginx 파드 veth> -nnq tcp port 80 or tcp port 443 or tcp port 8080 or tcp port $IngHttp or tcp port $IngHttps
tcpdump -i <nginx 파드 veth> -nn tcp port 80 or tcp port 443 or tcp port 8080 or tcp port $IngHttp or tcp port $IngHttps -w /tmp/ingress.pcap
- 오브젝트 삭제 : kubectl delete pod,svc,ingress --all
- Nginx 인그레스 컨트롤러 삭제 : helm uninstall -n ingress ingress-nginx
실습 환경 삭제 : 모든 실습 완료 후에는 꼭 삭제 확인
# CloudFormation 스택 삭제
aws cloudformation delete-stack --stack-name mylab
# [모니터링] CloudFormation 스택 상태 : 삭제 확인
while true; do
date
AWS_PAGER="" aws cloudformation list-stacks \
--stack-status-filter CREATE_IN_PROGRESS CREATE_COMPLETE CREATE_FAILED DELETE_IN_PROGRESS DELETE_FAILED \
--query "StackSummaries[*].{StackName:StackName, StackStatus:StackStatus}" \
--output table
sleep 1
done
/etc/hosts 에 추가한 내용 삭제
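(참고) 아래는 위에서 추가한 항목을 지우는 간단한 예시입니다. MYDOMAIN1/MYDOMAIN2 변수에 실습 때 사용한 도메인이 들어있다고 가정하며, 윈도우는 hosts 파일에서 직접 삭제합니다.
sudo sed -i "/$MYDOMAIN1/d" /etc/hosts   # macOS 는 sudo sed -i '' "/$MYDOMAIN1/d" /etc/hosts
sudo sed -i "/$MYDOMAIN2/d" /etc/hosts
cat /etc/hosts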
4. Gateway API 소개
Gateway API의 주요 기능
- 개선된 리소스 모델: API는 GatewayClass, Gateway 및 Route(HTTPRoute, TCPRoute 등)와 같은 새로운 사용자 정의 리소스를 도입하여 라우팅 규칙을 정의하는 보다 세부적이고 표현력 있는 방법을 제공합니다.
- 프로토콜 독립적: 주로 HTTP용으로 설계된 Ingress와 달리 Gateway API는 TCP, UDP, TLS를 포함한 여러 프로토콜을 지원합니다.
- 강화된 보안: TLS 구성 및 보다 세부적인 액세스 제어에 대한 기본 제공 지원.
- 교차 네임스페이스 지원: 서로 다른 네임스페이스의 서비스로 트래픽을 라우팅하여 보다 유연한 아키텍처를 구축할 수 있는 기능을 제공합니다.
- 확장성: API는 사용자 정의 리소스 및 정책으로 쉽게 확장할 수 있도록 설계되었습니다.
- 역할 지향: 클러스터 운영자, 애플리케이션 개발자, 보안 팀 간의 우려를 명확하게 분리합니다.
Gateway API 소개 : 기존의 Ingress 에 좀 더 기능을 추가, 역할 분리(role-oriented) - Docs
- 서비스 메시(istio)가 제공하는 풍부한 기능 중 일부와, 운영 관리에 필요한 기능들을 추가
- 추가 기능 : 헤더 기반 라우팅, 헤더 변조, 트래픽 미러링(트래픽 복제), 역할 기반 구성 등
- Gateway API is an add-on containing API kinds that provide dynamic infrastructure provisioning and advanced traffic routing.
- Make network services available by using an extensible, role-oriented, protocol-aware configuration mechanism.
구성 요소 (Resource)
- GatewayClass, Gateway, HTTPRoute, TCPRoute, Service (리소스 간 관계는 이 리스트 아래의 예시 스케치 참고)
- GatewayClass: Defines a set of gateways with common configuration and managed by a controller that implements the class.
- Gateway: Defines an instance of traffic handling infrastructure, such as cloud load balancer.
- HTTPRoute: Defines HTTP-specific rules for mapping traffic from a Gateway listener to a representation of backend network endpoints. These endpoints are often represented as a Service.
- Kubernetes Traffic Management: Combining Gateway API with Service Mesh for North-South and East-West Use Cases - Blog
- Request flow
- Why does a role-oriented API matter?
- 담당 업무의 역할에 따라서 동작/권한을 유연하게 제공할 수 있음
- 아래 그림 처럼 '스토어 개발자'는 Store 네임스페이스내에서 해당 store PATH 라우팅 관련 정책을 스스로 관리 할 수 있음
- Infrastructure Provider: Manages infrastructure that allows multiple isolated clusters to serve multiple tenants, e.g. a cloud provider.
- Cluster Operator: Manages clusters and is typically concerned with policies, network access, application permissions, etc.
- Application Developer: Manages an application running in a cluster and is typically concerned with application-level configuration and Service composition.
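아래는 리소스 간 관계와 역할 분리를 보여주는 최소 예시 스케치입니다. gatewayClassName, 네임스페이스, 서비스 이름 등은 설명을 위한 가정이며, 실제 동작 예시는 아래 5장(Gloo Gateway) 실습을 참고하세요.
# 클러스터 운영자: 공용 Gateway(리스너) 정의
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: example-gw            # 예시용 이름(가정)
  namespace: infra            # 운영자 관리 네임스페이스(가정)
spec:
  gatewayClassName: example-class   # 구현체(컨트롤러)가 제공하는 GatewayClass 이름(가정)
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All             # 다른 네임스페이스의 Route 연결 허용
---
# 스토어 개발자: 자신의 네임스페이스에서 /store 경로 라우팅을 스스로 관리
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: store
  namespace: store            # 예시용 네임스페이스(가정)
spec:
  parentRefs:
  - name: example-gw
    namespace: infra
  rules:
  - matches:
    - path:
        type: PathPrefix
        value: /store
    backendRefs:
    - name: store-svc         # 예시용 서비스 이름(가정)
      port: 8080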
5. Gloo Gateway
- 참고 링크 - Gloo Blog , Docs
- Gloo Gateway Architecture : These components work together to translate Gloo and Kubernetes Gateway API custom resources into Envoy configuration
- The config and secret watcher components in the gloo pod watch the cluster for new Kubernetes Gateway API and Gloo Gateway resources, such as Gateways, HTTPRoutes, or RouteOptions.
- When the config or secret watcher detect new or updated resources, it sends the resource configuration to the Gloo Gateway translation engine.
- The translation engine translates Kubernetes Gateway API and Gloo Gateway resources into Envoy configuration. All Envoy configuration is consolidated into an xDS snapshot.
- The reporter receives a status report for every resource that is processed by the translator.
- The reporter writes the resource status back to the etcd data store.
- The xDS snapshot is provided to the Gloo Gateway xDS server component in the gloo pod.
- Gateway proxies in the cluster pull the latest Envoy configuration from the Gloo Gateway xDS server.
- Users send a request to the IP address or hostname that the gateway proxy is exposed on.
- The gateway proxy uses the listener and route-specific configuration that was provided in the xDS snapshot to perform routing decisions and forward requests to destinations in the cluster.
- Translation engine
- The translation cycle starts by defining Envoy clusters from all configured Upstream and Kubernetes service resources. Clusters in this context are groups of similar hosts. Each Upstream has a type that determines how the Upstream is processed. Correctly configured Upstreams and Kubernetes services are converted into Envoy clusters that match their type, including information like cluster metadata.
- The next step in the translation cycle is to process all the functions on each Upstream. Function-specific cluster metadata is added and is later processed by function-specific Envoy filters.
- In the next step, all Envoy routes are generated. Routes are generated for each route rule that is defined on the HTTPRoute and RouteOption resources. When all of the routes are created, the translator processes any VirtualHostOption, ListenerOption, and HttpListenerOption resources, aggregates them into Envoy virtual hosts, and adds them to a new Envoy HTTP Connection Manager configuration.
- Filter plug-ins are queried for their filter configurations, generating the list of HTTP and TCP Filters that are added to the Envoy listeners.
- Finally, an xDS snapshot is composed of the all the valid endpoints (EDS), clusters (CDS), route configs (RDS), and listeners (LDS). The snapshot is sent to the Gloo Gateway xDS server. Gateway proxies in your cluster watch the xDS server for new config. When new config is detected, the config is pulled into the gateway proxy.
- Deployment patterns - Docs
1. Simple ingress
2. Shared gateway
3. Sharded gateway with central ingress
- 기존 설정에 따라 중앙 인그레스 엔드포인트로 다른 유형의 프록시를 사용하고 싶을 수 있습니다.
- 예를 들어 모든 트래픽이 통과해야 하는 HAProxy 또는 AWS NLB/ALB 인스턴스가 있을 수 있습니다
4. API gateway for a service mesh
- [Tutorial] Hands-On with the Kubernetes Gateway API and Envoy Proxy : 양이 상당히 많음 😅 - Blog Github
- Kubernetes-hosted application accessible via a gateway configured with policies for routing, service discovery, timeouts, debugging, access logging, and observability
Install
Install KinD Cluster
#
cat <<EOT> kind-1node.yaml
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
extraPortMappings:
- containerPort: 30000
hostPort: 30000
- containerPort: 30001
hostPort: 30001
- containerPort: 30002
hostPort: 30002
EOT
# Install KinD Cluster
kind create cluster --image kindest/node:v1.30.0 --config kind-1node.yaml --name myk8s
# 노드에 기본 툴 설치
docker exec -it myk8s-control-plane sh -c 'apt update && apt install tree psmisc lsof wget bsdmainutils bridge-utils net-tools dnsutils tcpdump ngrep iputils-ping git vim -y'
# 노드/파드 확인
kubectl get nodes -o wide
kubectl get pod -A
Install Gateway API CRDs : The Kubernetes Gateway API abstractions are expressed using Kubernetes CRDs.
# CRDs 설치 및 확인
kubectl apply -f https://github.com/kubernetes-sigs/gateway-api/releases/download/v1.0.0/standard-install.yaml
kubectl get crd
Install Glooctl Utility : GLOOCTL is a command-line utility that allows users to view, manage, and debug Gloo Gateway deployments - Link
# [신규 터미널] 아래 bash 진입 후 glooctl 툴 사용
docker exec -it myk8s-control-plane bash
----------------------------------------
# Install Glooctl Utility
## glooctl install gateway # install gloo's function gateway functionality into the 'gloo-system' namespace
## glooctl install ingress # install very basic Kubernetes Ingress support with Gloo into namespace gloo-system
## glooctl install knative # install Knative serving with Gloo configured as the default cluster ingress
## curl -sL https://run.solo.io/gloo/install | sh
curl -sL https://run.solo.io/gloo/install | GLOO_VERSION=v1.17.7 sh
export PATH=$HOME/.gloo/bin:$PATH
# 버전 확인
glooctl version
----------------------------------------
Install Gloo Gateway : 오픈소스 버전
# [신규 터미널] 모니터링
watch -d kubectl get pod,svc,endpointslices,ep -n gloo-system
# Install Gloo Gateway
## --set kubeGateway.enabled=true: Kubernetes Gateway 기능을 활성화합니다.
## --set gloo.disableLeaderElection=true: Gloo의 리더 선출 기능을 비활성화합니다. (단일 인스턴스에서 Gloo를 실행 시 유용)
## --set discovery.enabled=false: 서비스 디스커버리 기능을 비활성화합니다.
helm repo add gloo https://storage.googleapis.com/solo-public-helm
helm repo update
helm install -n gloo-system gloo-gateway gloo/gloo \
--create-namespace \
--version 1.17.7 \
--set kubeGateway.enabled=true \
--set gloo.disableLeaderElection=true \
--set discovery.enabled=false
# Confirm that the Gloo control plane has successfully been deployed using this command
kubectl rollout status deployment/gloo -n gloo-system
# 설치 확인
kubectl get crd | grep 'networking.k8s.io'
kubectl get crd | grep -v 'networking.k8s.io'
kubectl get pod,svc,endpointslices -n gloo-system
#
kubectl explain gatewayclasses
kubectl get gatewayclasses
NAME CONTROLLER ACCEPTED AGE
gloo-gateway solo.io/gloo-gateway True 21m
kubectl get gatewayclasses -o yaml
apiVersion: v1
items:
- apiVersion: gateway.networking.k8s.io/v1
kind: GatewayClass
metadata:
labels:
app: gloo
name: gloo-gateway
spec:
controllerName: solo.io/gloo-gateway
...
Install Httpbin Application : A simple HTTP Request & Response Service - Link
#
watch -d kubectl get pod,svc,endpointslices,ep -n httpbin
# Install Httpbin Application
kubectl apply -f https://raw.githubusercontent.com/solo-io/solo-blog/main/gateway-api-tutorial/01-httpbin-svc.yaml
# 설치 확인
kubectl get deploy,pod,svc,endpointslices,sa -n httpbin
kubectl rollout status deploy/httpbin -n httpbin
# (옵션) NodePort 설정
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
labels:
app: httpbin
service: httpbin
name: httpbin
namespace: httpbin
spec:
type: NodePort
ports:
- name: http
port: 8000
targetPort: 80
nodePort: 30000
selector:
app: httpbin
EOF
# (옵션) 로컬 접속 확인
echo "httpbin web - http://localhost:30000" # macOS 사용자
echo "httpbin web - http://192.168.50.10:30000" # Windows 사용자
Gateway API kinds - Docs
- GatewayClass: Defines a set of gateways with common configuration and managed by a controller that implements the class.
- Gateway: Defines an instance of traffic handling infrastructure, such as cloud load balancer.
- HTTPRoute: Defines HTTP-specific rules for mapping traffic from a Gateway listener to a representation of backend network endpoints. These endpoints are often represented as a Service.
Control : Envoy 데이터 플레인과 Gloo 컨트롤 플레인 구성
- Now we’ll configure a Gateway listener, establish external access to Gloo Gateway, and test the routing rules that are the core of the proxy configuration.
Configure a Gateway Listener
- Let’s begin by establishing a Gateway resource that sets up an HTTP listener on port 8080 to expose routes from all our namespaces. Gateway custom resources like this are part of the Gateway API standard.
# 02-gateway.yaml
kind: Gateway
apiVersion: gateway.networking.k8s.io/v1
metadata:
name: http
spec:
gatewayClassName: gloo-gateway
listeners:
- protocol: HTTP
port: 8080
name: http
allowedRoutes:
namespaces:
from: All
# gateway 리소스 생성
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/gateway-api-tutorial/02-gateway.yaml
# 확인 : Now we can confirm that the Gateway has been activated
kubectl get gateway -n gloo-system
kubectl get gateway -n gloo-system -o yaml | k neat
apiVersion: v1
items:
- apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
name: http
namespace: gloo-system
spec:
gatewayClassName: gloo-gateway
listeners:
- allowedRoutes:
namespaces:
from: All
name: http
port: 8080
protocol: HTTP
...
# You can also confirm that Gloo Gateway has spun up an Envoy proxy instance in response to the creation of this Gateway object by deploying gloo-proxy-http:
kubectl get deployment gloo-proxy-http -n gloo-system
NAME READY UP-TO-DATE AVAILABLE AGE
gloo-proxy-http 1/1 1 1 5m22s
# envoy 사용 확인
kubectl get pod -n gloo-system
kubectl describe pod -n gloo-system |grep Image:
Image: quay.io/solo-io/gloo-envoy-wrapper:1.17.7
Image: quay.io/solo-io/gloo:1.17.7
Image: quay.io/solo-io/gloo-envoy-wrapper:1.17.7
# gloo-proxy-http 서비스의 EXTERNAL-IP 는 Pending 상태
kubectl get svc -n gloo-system gloo-proxy-http
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
gloo-proxy-http LoadBalancer 10.96.71.22 <pending> 8080:31555/TCP 2m4s
# gloo-proxy-http NodePort 30001 설정
cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Service
metadata:
labels:
app.kubernetes.io/instance: http
app.kubernetes.io/managed-by: Helm
app.kubernetes.io/name: gloo-proxy-http
app.kubernetes.io/version: 1.17.7
gateway.networking.k8s.io/gateway-name: http
gloo: kube-gateway
helm.sh/chart: gloo-gateway-1.17.7
name: gloo-proxy-http
namespace: gloo-system
spec:
ports:
- name: http
nodePort: 30001
port: 8080
selector:
app.kubernetes.io/instance: http
app.kubernetes.io/name: gloo-proxy-http
gateway.networking.k8s.io/gateway-name: http
type: LoadBalancer
EOF
kubectl get svc -n gloo-system gloo-proxy-http
Establish External Access to Proxy
# Port Forward
# We will use a simple port-forward to expose the proxy’s HTTP port for us to use.
# (Note that gloo-proxy-http is Gloo’s deployment of the Envoy data plane.)
kubectl port-forward deployment/gloo-proxy-http -n gloo-system 8080:8080 &
Configure Simple Routing with an HTTPRoute
Let’s begin our routing configuration with the simplest possible route to expose the /get operation on httpbin
HTTPRoute is one of the new Kubernetes CRDs introduced by the Gateway API, as documented here. We’ll start by introducing a simple HTTPRoute for our service.
HTTPRoute Spec
- ParentRefs-Define which Gateways this Route wants to be attached to.
- Hostnames (optional)- Define a list of hostnames to use for matching the Host header of HTTP requests.
- Rules-Define a list of rules to perform actions against matching HTTP requests.
- Each rule consists of matches, filters (optional), backendRefs (optional) and timeouts (optional) fields.
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
name: httpbin
namespace: httpbin
labels:
example: httpbin-route
spec:
parentRefs:
- name: http
namespace: gloo-system
hostnames:
- "api.example.com"
rules:
- matches:
- path:
type: Exact
value: /get
backendRefs:
- name: httpbin
port: 8000
This example attaches to the default Gateway object created for us when we installed Gloo Gateway earlier.
See the gloo-system/http reference in the parentRefs stanza.
The Gateway object simply represents a host:port listener that the proxy will expose to accept ingress traffic.
# Our route watches for HTTP requests directed at the host api.example.com with the request path /get and then forwards the request to the httpbin service on port 8000.
# Let’s establish this route now:
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/gateway-api-tutorial/03-httpbin-route.yaml
#
kubectl get httproute -n httpbin
NAME HOSTNAMES AGE
httpbin ["api.example.com"] 3m15s
kubectl describe httproute -n httpbin
...
Spec:
Hostnames:
api.example.com
Parent Refs:
Group: gateway.networking.k8s.io
Kind: Gateway
Name: http
Namespace: gloo-system
Rules:
Backend Refs:
Group:
Kind: Service
Name: httpbin
Port: 8000
Weight: 1
Matches:
Path:
Type: Exact
Value: /get
...
Test the Simple Route with Curl
# let’s use curl to display the response with the -i option to additionally show the HTTP response code and headers.
echo "127.0.0.1 api.example.com" | sudo tee -a /etc/hosts
echo "httproute - http://api.example.com:30001/get" # 웹브라우저
혹은
curl -is -H "Host: api.example.com" http://localhost:8080/get # kubectl port-forward 사용 시
HTTP/1.1 200 OK
server: envoy
date: Sun, 06 Oct 2024 07:55:34 GMT
content-type: application/json
content-length: 239
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 25
{
"args": {},
"headers": {
"Accept": "*/*",
"Host": "api.example.com",
"User-Agent": "curl/8.7.1",
"X-Envoy-Expected-Rq-Timeout-Ms": "15000"
},
"origin": "10.244.0.11",
"url": "http://api.example.com/get"
}
Note that if we attempt to invoke another valid endpoint /delay on the httpbin service, it will fail with a 404 Not Found error. Why? Because our HTTPRoute policy is only exposing access to /get, one of the many endpoints available on the service. If we try to consume an alternative httpbin endpoint like /delay:
# 호출 응답 왜 그럴까?
curl -is -H "Host: api.example.com" http://localhost:8080/delay/1
HTTP/1.1 404 Not Found
date: Wed, 03 Jul 2024 07:19:21 GMT
server: envoy
content-length: 0
#
echo "httproute - http://api.example.com:30001/delay/1" # 웹브라우저
# nodeport 직접 접속
echo "httproute - http://api.example.com:30000/delay/1" # 1초 후 응답
echo "httproute - http://api.example.com:30000/delay/5" # 5초 후 응답
[정규식 패턴 매칭] Explore Routing with Regex Matching Patterns
Let’s assume that now we DO want to expose other httpbin endpoints like /delay. Our initial HTTPRoute is inadequate, because it is looking for an exact path match with /get.
We’ll modify it in a couple of ways. First, we’ll modify the matcher to look for path prefix matches instead of an exact match. Second, we’ll add a new request filter to rewrite the matched /api/httpbin/ prefix with just a / prefix, which will give us the flexibility to access any endpoint available on the httpbin service. So a path like /api/httpbin/delay/1 will be sent to httpbin with the path /delay/1.
- 예시) /api/httpbin/delay/1 ⇒ /delay/1
# Here are the modifications we’ll apply to our HTTPRoute:
- matches:
# Switch from an Exact Matcher(정확한 매칭) to a PathPrefix(경로 접두사 매칭) Matcher
- path:
type: PathPrefix
value: /api/httpbin/
filters:
# Replace(변경) the /api/httpbin matched prefix with /
- type: URLRewrite
urlRewrite:
path:
type: ReplacePrefixMatch
replacePrefixMatch: /
2가지 수정 내용 적용 후 확인
#
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/gateway-api-tutorial/04-httpbin-rewrite.yaml
# 확인
kubectl describe httproute -n httpbin
...
Spec:
Hostnames:
api.example.com
Parent Refs:
Group: gateway.networking.k8s.io
Kind: Gateway
Name: http
Namespace: gloo-system
Rules:
Backend Refs:
Group:
Kind: Service
Name: httpbin
Port: 8000
Weight: 1
Filters:
Type: URLRewrite
URL Rewrite:
Path:
Replace Prefix Match: /
Type: ReplacePrefixMatch
Matches:
Path:
Type: PathPrefix
Value: /api/httpbin/
Test Routing with Regex Matching Patterns
When we used only a single route with an exact match pattern, we could only exercise the httpbin /get endpoint. Let’s now use curl to confirm that both /get and /delay work as expected.
#
echo "httproute - http://api.example.com:30001/api/httpbin/get" # 웹브라우저
혹은
curl -is -H "Host: api.example.com" http://localhost:8080/api/httpbin/get # kubectl port-forward 사용 시
HTTP/1.1 200 OK
server: envoy
date: Sun, 06 Oct 2024 08:08:09 GMT
content-type: application/json
content-length: 289
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 18 # envoy 가 업스트림 httpbin 요청 처리에 걸린 시간 0.018초
{
"args": {},
"headers": {
"Accept": "*/*",
"Host": "api.example.com",
"User-Agent": "curl/8.7.1",
"X-Envoy-Expected-Rq-Timeout-Ms": "15000",
"X-Envoy-Original-Path": "/api/httpbin/get"
},
"origin": "10.244.0.11",
"url": "http://api.example.com/get"
}
# 아래 NodePort 와 GW API 통한 접속 비교
echo "httproute - http://api.example.com:30001/api/httpbin/get"
echo "httproute - http://api.example.com:30000/api/httpbin/get" # NodePort 직접 접근
---
#
echo "httproute - http://api.example.com:30001/api/httpbin/delay/1" # 웹브라우저
혹은
curl -is -H "Host: api.example.com" http://localhost:8080/api/httpbin/delay/1 # kubectl port-forward 사용 시
HTTP/1.1 200 OK
server: envoy
date: Wed, 03 Jul 2024 07:31:47 GMT
content-type: application/json
content-length: 342
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 1023 # envoy 가 업스트림 httpbin 요청 처리에 걸린 시간 1초 이상
{
"args": {},
"data": "",
"files": {},
"form": {},
"headers": {
"Accept": "*/*",
"Host": "api.example.com",
"User-Agent": "curl/8.6.0",
"X-Envoy-Expected-Rq-Timeout-Ms": "15000",
"X-Envoy-Original-Path": "/api/httpbin/delay/1"
},
"origin": "10.244.0.7",
"url": "http://api.example.com/delay/1"
}
curl -is -H "Host: api.example.com" http://localhost:8080/api/httpbin/delay/2
Perfect! It works just as expected! Note that the /delay operation completed successfully and that the 1-second delay was applied. The response header x-envoy-upstream-service-time: 1023 indicates that Envoy reported that the upstream httpbin service required just over 1 second (1,023 milliseconds) to process the request. In the initial /get operation, which doesn't inject an artificial delay, observe that the same header reported only 18 milliseconds of upstream processing time.
[업스트림 베어러 토큰을 사용한 변환] Test Transformations with Upstream Bearer Tokens
목적 : 요청을 라우팅하는 백엔드 시스템 중 하나에서 인증해야 하는 요구 사항이 있는 경우는 어떻게 할까요? 이 업스트림 시스템에는 권한 부여를 위한 API 키가 필요하고, 이를 소비하는 클라이언트에 직접 노출하고 싶지 않다고 가정해 보겠습니다. 즉, 프록시 계층에서 요청에 주입할 간단한 베어러 토큰을 구성하고 싶습니다. (정적 API 키 토큰을 직접 주입)
What if we have a requirement to authenticate with one of the backend systems to which we route our requests?
Let’s assume that this upstream system requires an API key for authorization, and that we don’t want to expose this directly to the consuming client. In other words, we’d like to configure a simple bearer token to be injected into the request at the proxy layer.
We can express this in the Gateway API by adding a filter that applies a simple transformation to the incoming request.
This will be applied along with the URLRewrite filter we created in the previous step.
# The new filters stanza in our HTTPRoute now looks like this:
filters:
- type: URLRewrite
urlRewrite:
path:
type: ReplacePrefixMatch
replacePrefixMatch: /
# Add a Bearer token to supply a static API key when routing to backend system
- type: RequestHeaderModifier
requestHeaderModifier:
add:
- name: Authorization
value: Bearer my-api-key
#
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/gateway-api-tutorial/05-httpbin-rewrite-xform.yaml
#
kubectl describe httproute -n httpbin
...
Spec:
...
Rules:
Backend Refs:
Group:
Kind: Service
Name: httpbin
Port: 8000
Weight: 1
Filters:
Type: URLRewrite
URL Rewrite:
Path:
Replace Prefix Match: /
Type: ReplacePrefixMatch
Request Header Modifier:
Add:
Name: Authorization
Value: Bearer my-api-key
Type: RequestHeaderModifier
Matches:
Path:
Type: PathPrefix
Value: /api/httpbin/
- 동작 테스트
#
echo "httproute - http://api.example.com:30001/api/httpbin/get" # 웹브라우저
혹은
curl -is -H "Host: api.example.com" http://localhost:8080/api/httpbin/get # kubectl port-forward 사용 시
HTTP/1.1 200 OK
server: envoy
date: Sun, 06 Oct 2024 08:20:00 GMT
content-type: application/json
content-length: 332
access-control-allow-origin: *
access-control-allow-credentials: true
x-envoy-upstream-service-time: 11
{
"args": {},
"headers": {
"Accept": "*/*",
"Authorization": "Bearer my-api-key",
"Host": "api.example.com",
"User-Agent": "curl/8.7.1",
"X-Envoy-Expected-Rq-Timeout-Ms": "15000",
"X-Envoy-Original-Path": "/api/httpbin/get"
},
"origin": "10.244.0.11",
"url": "http://api.example.com/get"
}
Migrate
In this section, we’ll explore how a couple of common service migration techniques, dark launches with header-based routing and canary releases with percentage-based routing, are supported by the Gateway API standard.
Configure Two Workloads for Migration Routing
Let’s first establish two versions of a workload to facilitate our migration example. We’ll use the open-source Fake Service to enable this.
- Fake service that can handle both HTTP and gRPC traffic, for testing upstream service communications and testing service mesh and other scenarios.
Let’s establish a v1 of our my-workload service that’s configured to return a response string containing “v1”. We’ll create a corresponding my-workload-v2 service as well.
# You should see the response below, indicating deployments for both v1 and v2 of my-workload have been created in the my-workload namespace.
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/gateway-api-tutorial/06-workload-svcs.yaml
# v1,v2 2가지 버전 워크로드 확인
kubectl get deploy,pod,svc,endpointslices -n my-workload
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/my-workload-v1 1/1 1 1 77s
deployment.apps/my-workload-v2 1/1 1 1 77s
NAME READY STATUS RESTARTS AGE
pod/my-workload-v1-7577fdcc9d-4cv5r 1/1 Running 0 77s
pod/my-workload-v2-68f84654dd-8725x 1/1 Running 0 77s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/my-workload-v1 ClusterIP 10.96.35.183 <none> 8080/TCP 77s
service/my-workload-v2 ClusterIP 10.96.56.232 <none> 8080/TCP 77s
NAME ADDRESSTYPE PORTS ENDPOINTS AGE
endpointslice.discovery.k8s.io/my-workload-v1-bpzgg IPv4 8080 10.244.0.9 77s
endpointslice.discovery.k8s.io/my-workload-v2-ltp7d IPv4 8080 10.244.0.8 77s
Test Simple V1 Routing
Before we dive into routing to multiple services, we’ll start by building a simple HTTPRoute that sends HTTP requests to host api.example.com whose paths begin with /api/my-workload to the v1 workload:
apiVersion: gateway.networking.k8s.io/v1beta1
kind: HTTPRoute
metadata:
name: my-workload
namespace: my-workload
labels:
example: my-workload-route
spec:
parentRefs:
- name: http
namespace: gloo-system
hostnames:
- "api.example.com"
rules:
- matches:
- path:
type: PathPrefix
value: /api/my-workload
backendRefs:
- name: my-workload-v1
namespace: my-workload
port: 8080
Now apply this route:
#
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/gateway-api-tutorial/07-workload-route.yaml
#
kubectl get httproute -A
NAMESPACE NAME HOSTNAMES AGE
httpbin httpbin ["api.example.com"] 41m
my-workload my-workload ["api.example.com"] 39s
#
kubectl describe httproute -n my-workload
...
Spec:
Hostnames:
api.example.com
Parent Refs:
Group: gateway.networking.k8s.io
Kind: Gateway
Name: http
Namespace: gloo-system
Rules:
Backend Refs:
Group:
Kind: Service
Name: my-workload-v1
Namespace: my-workload
Port: 8080
Weight: 1
Matches:
Path:
Type: PathPrefix
Value: /api/my-workload
#
curl -is -H "Host: api.example.com" http://localhost:8080/api/my-workload
HTTP/1.1 200 OK
vary: Origin
date: Sun, 06 Oct 2024 08:26:25 GMT
content-length: 294
content-type: text/plain; charset=utf-8
x-envoy-upstream-service-time: 33
server: envoy
{
"name": "my-workload-v1",
"uri": "/api/my-workload",
"type": "HTTP",
"ip_addresses": [
"10.244.0.13"
],
"start_time": "2024-10-06T08:26:25.859900",
"end_time": "2024-10-06T08:26:25.871258",
"duration": "11.359ms",
"body": "Hello From My Workload (v1)!",
"code": 200
}
Simulate a v2 Dark Launch with Header-Based Routing
Dark Launch is a great cloud migration technique that releases new features to a select subset of users to gather feedback and experiment with improvements before potentially disrupting a larger user community.
- Dark Launch : 일부 사용자에게 새로운 기능을 출시하여 피드백을 수집하고 잠재적으로 더 큰 사용자 커뮤니티를 방해하기 전에 개선 사항을 실험하는 훌륭한 클라우드 마이그레이션 기술
We will simulate a dark launch in our example by installing the new cloud version of our service in our Kubernetes cluster, and then using declarative policy to route only requests containing a particular header to the new v2 instance. The vast majority of users will continue to use the original v1 of the service just as before.
- 우리는 Kubernetes 클러스터에 서비스의 새로운 클라우드 버전을 설치한 다음, 선언적 정책을 사용하여 특정 헤더를 포함하는 요청만 새 v2 인스턴스로 라우팅하는 방식으로 다크 런치를 시뮬레이션합니다. 대다수의 사용자는 이전과 마찬가지로 서비스의 v1을 계속 사용하게 됩니다.
rules:
- matches:
- path:
type: PathPrefix
value: /api/my-workload
# Add a matcher to route requests with a v2 version header to v2
# version=v2 헤더값이 있는 사용자만 v2 라우팅
headers:
- name: version
value: v2
backendRefs:
- name: my-workload-v2
namespace: my-workload
port: 8080
- matches:
# Route requests without the version header to v1 as before
# 대다수 일반 사용자는 기존 처럼 v1 라우팅
- path:
type: PathPrefix
value: /api/my-workload
backendRefs:
- name: my-workload-v1
namespace: my-workload
port: 8080
Configure two separate routes, one for v1 that the majority of service consumers will still use, and another route for v2 that will be accessed by specifying a request header with name version and value v2. Let’s apply the modified HTTPRoute:
#
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/gateway-api-tutorial/08-workload-route-header.yaml
#
kubectl describe httproute -n my-workload
...
Spec:
...
Rules:
Backend Refs:
Group:
Kind: Service
Name: my-workload-v2
Namespace: my-workload
Port: 8080
Weight: 1
Matches:
Headers:
Name: version
Type: Exact
Value: v2
Path:
Type: PathPrefix
Value: /api/my-workload
Backend Refs:
Group:
Kind: Service
Name: my-workload-v1
Namespace: my-workload
Port: 8080
Weight: 1
Matches:
Path:
Type: PathPrefix
Value: /api/my-workload
# Now we’ll test the original route, with no special headers supplied, and confirm that traffic still goes to v1:
curl -is -H "Host: api.example.com" http://localhost:8080/api/my-workload
curl -is -H "Host: api.example.com" http://localhost:8080/api/my-workload | grep body
"body": "Hello From My Workload (v1)!",
# But it we supply the version: v2 header, note that our gateway routes the request to v2 as expected:
curl -is -H "Host: api.example.com" -H "version: v2" http://localhost:8080/api/my-workload
curl -is -H "Host: api.example.com" -H "version: v2" http://localhost:8080/api/my-workload | grep body
Expand V2 Testing with Percentage-Based Routing
After a successful dark-launch, we may want a period where we use a blue-green strategy of gradually shifting user traffic from the old version to the new one. Let’s explore this with a routing policy that splits our traffic evenly, sending half our traffic to v1 and the other half to v2.
- 성공적인 다크 런칭 이후, 이전 버전에서 새 버전으로 사용자 트래픽을 점진적으로 옮기는 블루-그린 전략을 사용하는 기간을 둘 수 있습니다. 트래픽을 균등하게 분할하여 절반은 v1으로, 나머지 절반은 v2로 보내는 라우팅 정책으로 이를 살펴보겠습니다.
We will modify our HTTPRoute to accomplish this by removing the header-based routing rule that drove our dark launch. Then we will replace that with a 50-50 weight applied to each of the routes, as shown below:
rules:
- matches:
- path:
type: PathPrefix
value: /api/my-workload
# Configure a 50-50 traffic split across v1 and v2 : 버전 1,2 50:50 비율
backendRefs:
- name: my-workload-v1
namespace: my-workload
port: 8080
weight: 50
- name: my-workload-v2
namespace: my-workload
port: 8080
weight: 50
# Apply this 50-50 routing policy with kubectl:
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/gateway-api-tutorial/09-workload-route-split.yaml
#
kubectl describe httproute -n my-workload
...
# 반복 접속 후 대략적인 비율 확인
for i in {1..100}; do curl -s -H "Host: api.example.com" http://localhost:8080/api/my-workload/ | grep body; done | sort | uniq -c | sort -nr
for i in {1..200}; do curl -s -H "Host: api.example.com" http://localhost:8080/api/my-workload/ | grep body; done | sort | uniq -c | sort -nr
Debug
Solve a Problem with Glooctl CLI
A common source of Gloo configuration errors is mistyping an upstream reference, perhaps when copy/pasting it from another source but “missing a spot” when changing the name of the backend service target. In this example, we’ll simulate making an error like that, and then demonstrating how glooctl can be used to detect it.
- Gloo 구성 오류의 일반적인 원인은 업스트림 참조를 잘못 입력하는 것입니다. 예를 들어 다른 곳에서 복사/붙여넣기 하면서 백엔드 서비스 대상의 이름을 바꿀 때 "한 군데를 놓치는" 경우입니다. 이 예에서는 그런 오류를 일부러 만들어 보고, glooctl 로 이를 어떻게 감지할 수 있는지 보여줍니다.
First, let’s apply a change to simulate the mistyping of an upstream config so that it is targeting a non-existent my-bad-workload-v2 backend service, rather than the correct my-workload-v2.
- my-bad-workload-v2 업스트림 구성의 오타를 시뮬레이션하여 올바른 타겟팅하는 대신 존재하지 않는 백엔드 서비스를 타겟팅하도록 변경
# [신규 터미널] 모니터링
kubectl get httproute -n my-workload my-workload -o yaml -w
#
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/gateway-api-tutorial/10-workload-route-split-bad-dest.yaml
#
kubectl describe httproute -n my-workload
...
Spec:
Hostnames:
api.example.com
Parent Refs:
Group: gateway.networking.k8s.io
Kind: Gateway
Name: http
Namespace: gloo-system
Rules:
Backend Refs:
Group:
Kind: Service
Name: my-workload-v1
Namespace: my-workload
Port: 8080
Weight: 50
Group:
Kind: Service
Name: my-bad-workload-v2
Namespace: my-workload
Port: 8080
Weight: 50
Matches:
Path:
Type: PathPrefix
Value: /api/my-workload
Status:
Parents:
Conditions:
Last Transition Time: 2024-10-06T08:38:25Z
Message: Service "my-bad-workload-v2" not found
Observed Generation: 4
Reason: BackendNotFound
Status: False
Type: ResolvedRefs
Last Transition Time: 2024-10-06T08:25:47Z
Message:
Observed Generation: 4
Reason: Accepted
Status: True
Type: Accepted
Controller Name: solo.io/gloo-gateway
Parent Ref:
Group: gateway.networking.k8s.io
Kind: Gateway
Name: http
Namespace: gloo-system
When we test this out, note that the 50-50 traffic split is still in place. This means that about half of the requests will be routed to my-workload-v1 and succeed, while the others will attempt to use the non-existent my-bad-workload-v2 and fail like this:
#
curl -is -H "Host: api.example.com" http://localhost:8080/api/my-workload
curl -is -H "Host: api.example.com" http://localhost:8080/api/my-workload
HTTP/1.1 500 Internal Server Error
date: Wed, 03 Jul 2024 08:21:11 GMT
server: envoy
content-length: 0
#
for i in {1..100}; do curl -s -H "Host: api.example.com" http://localhost:8080/api/my-workload/ | grep body; done | sort | uniq -c | sort -nr
So we’ll deploy one of the first weapons from the Gloo debugging arsenal, the glooctl check utility. It verifies a number of Gloo resources, confirming that they are configured correctly and are interconnected with other resources correctly. For example, in this case, glooctl will detect the error in the mis-connection between the HTTPRoute and its backend target:
#
docker exec -it myk8s-control-plane bash
-----------------------------------
export PATH=$HOME/.gloo/bin:$PATH
glooctl check
Checking Gateways... OK
Checking Proxies... 1 Errors!
Detected Kubernetes Gateway integration!
Checking Kubernetes GatewayClasses... OK
Checking Kubernetes Gateways... OK
Checking Kubernetes HTTPRoutes... 1 Errors!
Skipping Gloo Instance check -- Gloo Federation not detected.
Error: 2 errors occurred:
* Found proxy with warnings by 'gloo-system': gloo-system gloo-system-http
Reason: warning:
Route Warning: InvalidDestinationWarning. Reason: invalid destination in weighted destination list: *v1.Upstream { blackhole_ns.kube-svc:blackhole-ns-blackhole-cluster-8080 } not found
* HTTPRoute my-workload.my-workload.http status (ResolvedRefs) is not set to expected (True). Reason: BackendNotFound, Message: Service "my-bad-workload-v2" not found
# 원인 관련 정보 확인
kubectl get httproute my-workload -n my-workload -o yaml
...
status:
parents:
- conditions:
- lastTransitionTime: "2023-11-28T21:09:20Z"
message: ""
observedGeneration: 6
reason: BackendNotFound
status: "False"
type: ResolvedRefs
...
# 정상 설정으로 해결 configuration is again clean.
kubectl apply -f https://raw.githubusercontent.com/solo-io/gloo-gateway-use-cases/main/gateway-api-tutorial/09-workload-route-split.yaml
kubectl get httproute my-workload -n my-workload -o yaml
#
glooctl check
...
Observe
Explore Envoy Metrics
Envoy publishes a host of metrics that may be useful for observing system behavior. In our very modest kind cluster for this exercise, you can count over 3,000 individual metrics! You can learn more about them in the Envoy documentation here.
For this 30-minute exercise, let’s take a quick look at a couple of the useful metrics that Envoy produces for every one of our backend targets.
First, we’ll port-forward the Envoy administrative port 19000 to our local workstation:
#
kubectl -n gloo-system port-forward deployment/gloo-proxy-http 19000 &
# 아래 관리 페이지에서 각각 메뉴 링크 클릭 확인
echo "Envoy Proxy Admin - http://localhost:19000"
echo "Envoy Proxy Admin - http://localhost:19000/stats/prometheus"
For this exercise, let’s view two of the relevant metrics from the first part of this exercise: one that counts the number of successful (HTTP 2xx) requests processed by our httpbin backend (or cluster, in Envoy terminology), and another that counts the number of requests returning server errors (HTTP 5xx) from that same backend:
#
curl -s http://localhost:19000/stats | grep -E "(^cluster.kube-svc_httpbin-httpbin-8000_httpbin.upstream.*(2xx|5xx))"
cluster.kube-svc_httpbin-httpbin-8000_httpbin.upstream_rq_2xx: 32
cluster.kube-svc_httpbin-httpbin-8000_httpbin.upstream_rq_5xx: 7
# If we apply a curl request that forces a 500 failure from the httpbin backend, using the /status/500 endpoint, I’d expect the number of 2xx requests to remain the same, and the number of 5xx requests to increment by one:
curl -is -H "Host: api.example.com" http://localhost:8080/api/httpbin/status/500
HTTP/1.1 500 Internal Server Error
server: envoy
date: Wed, 03 Jul 2024 08:30:06 GMT
content-type: text/html; charset=utf-8
access-control-allow-origin: *
access-control-allow-credentials: true
content-length: 0
x-envoy-upstream-service-time: 28
#
curl -s http://localhost:19000/stats | grep -E "(^cluster.kube-svc_httpbin-httpbin-8000_httpbin.upstream.*(2xx|5xx))"
cluster.kube-svc_httpbin-httpbin-8000_httpbin.upstream_rq_2xx: 32
cluster.kube-svc_httpbin-httpbin-8000_httpbin.upstream_rq_5xx: 15
Cleanup
kind delete cluster --name myk8s
[2] + 37292 exit 1 kubecolor -n gloo-system port-forward deployment/gloo-proxy-http 19000
[1] + 27738 exit 1 kubecolor port-forward deployment/gloo-proxy-http -n gloo-system 8080:8080
Deleted nodes: ["myk8s-control-plane"]
6. 기타 Gateway API
Cilium
[OnlineLab] Cilium Gateway API - Link
[OnlineLab] Advanced Gateway API Use Cases - Link
Istio
- Kubernetes Traffic Management: Combining Gateway API with Service Mesh for North-South and East-West Use Cases - Blog
- Istio Gateway API 활용하기 https://devops-james.tistory.com/317
Kong API Gateway
- Kong API Gateway 를 Gateway API 형태 설치 https://mokpolar.tistory.com/68
Envoy Gateway
- Envoy Gateway 사용 + 부하분산 (hey 도구 소개 감사합니다!) https://devops-james.tistory.com/320
- Manages Envoy Proxy as a Standalone or Kubernetes-based API Gateway - https://gateway.envoyproxy.io/
7. CoreDNS
추천 링크
추천 영상