Istio Traffic Control

Scenario 1: split client traffic across services with a VirtualService
A busybox client sends requests through the VirtualService, which splits them 8:2 between the httpd Service and the tomcat Service.

YAML files:

  • client
  • deployment
  • svc
  • vs
client.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: client
  namespace: xxm
spec:
  replicas: 1           # number of pods
  selector:
    matchLabels:        # pod selector
      app: client
  template:
    metadata:
      labels:           # labels for the pod selector to match
        app: client
    spec:
      containers:
      - name: busybox
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh", "-c", "sleep 3600"]

deployment
# httpd Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: httpd
  namespace: xxm
  labels:
    server: httpd
    app: web
spec:
  replicas: 1           # number of pods
  selector:
    matchLabels:        # pod selector
      server: httpd
      app: web
  template:
    metadata:
      name: httpd
      labels:           # labels for the pod selector to match
        server: httpd
        app: web
    spec:
      containers:
      - name: busybox   # container name inside the pod
        image: busybox
        imagePullPolicy: IfNotPresent
        command: ["/bin/sh", "-c", "echo 'hello httpd' > /var/www/index.html; httpd -f -p 8080 -h /var/www"]
---
# tomcat Deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tomcat
  namespace: xxm
  labels:
    server: tomcat
    app: web
spec:
  replicas: 1           # number of pods
  selector:
    matchLabels:        # pod selector
      server: tomcat
      app: web
  template:
    metadata:
      name: tomcat
      labels:           # labels for the pod selector to match
        server: tomcat
        app: web
    spec:
      containers:
      - name: tomcat    # container name inside the pod
        image: kubeguide/tomcat-app:v1
        imagePullPolicy: IfNotPresent

service
apiVersion: v1
kind: Service
metadata:
  name: tomcat-svc
  namespace: xxm
spec:
  ports:
  - name: tomcat
    port: 8080          # port exposed by the Service
    targetPort: 8080
    protocol: TCP
  selector:             # pod selector
    server: tomcat
---
apiVersion: v1
kind: Service
metadata:
  name: httpd-svc
  namespace: xxm
spec:
  ports:
  - name: http
    port: 8080          # port exposed by the Service
    targetPort: 8080
    protocol: TCP
  selector:             # pod selector
    server: httpd
---
apiVersion: v1
kind: Service
metadata:
  name: web-svc
  namespace: xxm
spec:
  ports:
  - name: http
    port: 8080          # port exposed by the Service
    targetPort: 8080
    protocol: TCP
  selector:             # pod selector
    app: web
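web-svc is what ties the two Deployments together: its selector app: web matches both the httpd and tomcat pods, while httpd-svc and tomcat-svc each pick a single backend via the server label. A quick sanity check of that wiring (a sketch; the labels are the ones defined above):

kubectl get pods -n xxm -l app=web -o wide
kubectl get endpoints web-svc -n xxm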

VS
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-svc-vs
  namespace: xxm
spec:
  hosts:
  - web-svc
  http:
  - route:
    - destination:
        host: tomcat-svc   # target Service to forward to
      weight: 20
    - destination:
        host: httpd-svc
      weight: 80

Apply
In the yaml directory, run:
kubectl apply -f <(istioctl kube-inject -f xxm-client.yaml)
kubectl apply -f <(istioctl kube-inject -f xxm-deploy.yaml)
kubectl apply -f xxm-svc.yaml
kubectl apply -f xxm-vs-traffic.yaml
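istioctl kube-inject adds the Envoy sidecar to each manifest at apply time. As an optional alternative (not part of the original walkthrough), labeling the namespace enables automatic injection, after which a plain kubectl apply is enough:

kubectl label namespace xxm istio-injection=enabled
kubectl apply -f xxm-client.yaml
kubectl apply -f xxm-deploy.yaml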

Check the endpoints to verify that everything started:
kubectl get endpoints -n xxm

# httpd-svc, using its actual pod IP
curl 192.168.82.232:8080

# tomcat-svc, using its actual pod IP
curl 192.168.82.235:8080


Everything is up.
Exec into the busybox pod to verify the Services:
kubectl exec -it -n xxm client-549d4564bf-bnpp4 -- sh

wget -q -O - http://httpd-svc:8080

wget -q -O - tomcat-svc:8080

Still inside busybox, verify the traffic split through the VirtualService:
# through the VS
wget -q -O - http://web-svc:8080


As expected, most of the traffic goes to httpd.
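Single requests only hint at the split; looping from inside the busybox pod makes the 80/20 ratio visible. A minimal sketch in POSIX sh (busybox's shell), printing just the start of each response:

# fire 10 requests at the VS-fronted Service and show who answered
i=0
while [ $i -lt 10 ]; do
  wget -q -O - http://web-svc:8080 | head -c 40
  echo
  i=$((i+1))
done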
Scenario 2: route to a service based on request headers. Modify the VirtualService so that requests whose end-user header is xxm go to tomcat.
Edit the VS:
vi xxm-vs-match.yaml

# Exact header match: key end-user == xxm routes to tomcat
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: web-svc-vs
  namespace: xxm
spec:
  hosts:
  - web-svc
  http:
  - match:
    - headers:
        end-user:
          exact: xxm
    # - uri:
    #     prefix: /index.html
    route:
    - destination:
        host: tomcat-svc   # target Service to forward to
  - route:
    - destination:
        host: httpd-svc

kubectl apply -f xxm-vs-match.yaml
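To confirm the sidecar actually received the header rule, istioctl can dump the client proxy's route table. A sketch, assuming the same client pod name as earlier:

istioctl proxy-config routes client-549d4564bf-bnpp4.xxm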

Exec into the busybox client again:
kubectl exec -it -n xxm client-549d4564bf-bnpp4 -- sh
wget -q -O - web-svc:8080
wget -q -O - web-svc:8080 --header 'end-user:xxm'
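Repeating both requests a few times makes the match unmistakable: with the header every response should be the Tomcat page, without it every response should be hello httpd. A sketch in the same style as before:

# with the matching header: always tomcat
wget -q -O - http://web-svc:8080 --header 'end-user:xxm' | head -c 40; echo
# without it: always httpd
wget -q -O - http://web-svc:8080 | head -c 40; echo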

Verified: only requests carrying the header end-user: xxm reach tomcat.
