20. Kubernetes Notes: Pod Scheduling (Part 3) - Taints and Tolerations

Overview: A taint is key-value attribute data defined on a node that makes the node refuse to run Pods scheduled onto it, unless the Pod explicitly declares that it can accept the node's taints. A toleration is key-value attribute data defined on a Pod that lists which node taints the Pod can tolerate; the scheduler will only place a Pod on a node whose taints the Pod tolerates.

  • Whether a Pod can be scheduled onto a node comes down to two questions (a quick way to inspect taints follows this list):
  • Does the node have any taints?
  • If it does, can the Pod tolerate those taints?
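To answer the first question at a glance, a custom-columns query like the sketch below lists every node together with its taints:

kubectl get nodes -o custom-columns=NAME:.metadata.name,TAINTS:.spec.taints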
Taints and tolerations: Taints are defined in a node's spec (nodeSpec), while tolerations are defined in a Pod's spec (podSpec). Both are key-value data, and both additionally support an effect marker, giving the syntax key=value:effect. The key and value follow the same usage and format rules as resource annotations, while effect defines how strongly the node repels Pods. It takes one of the following three effect values (a kubectl sketch follows the list):
  • NoSchedule
    New Pods that cannot tolerate this taint must not be scheduled onto the node; this is a hard constraint. Pods already running on the node are not affected.
  • PreferNoSchedule
    The soft version of NoSchedule: new Pods that cannot tolerate this taint should preferably not be scheduled onto the node, but may still be placed there when no other node is available. Pods already running on the node are not affected.
  • NoExecute
    New Pods that cannot tolerate this taint must not be scheduled onto the node (a hard constraint); in addition, Pods already running on the node are evicted if a change to the node's taints or to the Pod's tolerations means the match no longer holds.
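As a quick illustration of the three effects, they could be applied with kubectl taint as sketched below (the node name node1 and the key dedicated are made-up examples):

# Hard constraint: new Pods without a matching toleration are not scheduled here
kubectl taint nodes node1 dedicated=gpu:NoSchedule
# Soft constraint: the scheduler merely tries to avoid this node
kubectl taint nodes node1 dedicated=gpu:PreferNoSchedule
# Hard constraint that additionally evicts running Pods lacking a toleration
kubectl taint nodes node1 dedicated=gpu:NoExecute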
When defining a toleration on a Pod, two operators are supported: Equal, an equality comparison meaning the toleration must match the taint on all of key, value, and effect; and Exists, an existence check meaning only key and effect must match, while the toleration's value field must be left empty.
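A minimal sketch of the two operators inside a Pod spec (the key diskfull and value true are placeholder examples):

tolerations:
- key: "diskfull"       # Equal: key, value, and effect must all match the taint
  operator: "Equal"
  value: "true"
  effect: "NoExecute"
- key: "diskfull"       # Exists: only key and effect are compared; value stays empty
  operator: "Exists"
  effect: "NoExecute"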
Pod scheduling order: A node may carry multiple taints, and a Pod may carry multiple tolerations. When checking them against each other, the following logic applies.
  1. First, set aside every taint for which the Pod has a matching toleration.
  2. Among the remaining unmatched taints, if any one carries the NoSchedule effect, the Pod is refused scheduling onto this node.
  3. Among the unmatched taints, if none carries NoSchedule but at least one carries PreferNoSchedule, the scheduler tries to avoid placing the Pod on this node.
  4. If at least one unmatched taint carries the NoExecute effect, the node immediately evicts the Pod, or the Pod is never scheduled there in the first place. Furthermore, even when a toleration does match a NoExecute taint, if the toleration also sets a tolerationSeconds time limit, the Pod is still evicted from the node once that limit expires (see the sketch after this list).
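For rule 4, a hedged sketch of tolerationSeconds: this toleration matches a hypothetical diskfull NoExecute taint, yet the Pod is still evicted 3600 seconds after the taint appears:

tolerations:
- key: "diskfull"           # hypothetical taint key
  operator: "Exists"
  effect: "NoExecute"
  tolerationSeconds: 3600   # tolerate the taint for one hour, then get evicted anyway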
On a Kubernetes cluster deployed with kubeadm, the master node is automatically tainted so that Pods unable to tolerate the taint are kept off it. As a result, manually created Pods that do not explicitly add a toleration for this taint will never be scheduled to the master.
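To verify the taint, and optionally lift it so that ordinary Pods may land on the master, something like the following should work (the node name matches this cluster; adjust for yours):

# Show the master's taints
kubectl describe node k8s-master.org | grep Taints
# Optionally remove the kubeadm-applied taint (the trailing '-' deletes it)
kubectl taint nodes k8s-master.org node-role.kubernetes.io/master:NoSchedule-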
Example 1: Scheduling a Pod onto the master by tolerating the master's NoSchedule taint
[root@k8s-master Scheduler]# kubectl describe node k8s-master.org    # inspect the master's taint and its effect
...
Taints:             node-role.kubernetes.io/master:NoSchedule
Unschedulable:      false

[root@k8s-master Scheduler]# cat tolerations-daemonset-demo.yaml
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: daemonset-demo
  namespace: default
  labels:
    app: prometheus
    component: node-exporter
spec:
  selector:
    matchLabels:
      app: prometheus
      component: node-exporter
  template:
    metadata:
      name: prometheus-node-exporter
      labels:
        app: prometheus
        component: node-exporter
    spec:
      tolerations:                             # tolerate the master's NoSchedule taint
      - key: node-role.kubernetes.io/master    # the taint key
        effect: NoSchedule                     # the taint effect
        operator: Exists                       # the key only has to exist
      containers:
      - image: prom/node-exporter:latest
        name: prometheus-node-exporter
        ports:
        - name: prom-node-exp
          containerPort: 9100
          hostPort: 9100

[root@k8s-master Scheduler]# kubectl apply -f tolerations-daemonset-demo.yaml
[root@k8s-master Scheduler]# kubectl get pod -o wide
NAME                   READY   STATUS    RESTARTS   AGE     IP              NODE
daemonset-demo-7fgnd   2/2     Running   0          5m15s   10.244.91.106   k8s-node2.org
daemonset-demo-dmd47   2/2     Running   0          5m15s   10.244.70.105   k8s-node1.org
daemonset-demo-jhzwf   2/2     Running   0          5m15s   10.244.42.29    k8s-node3.org
daemonset-demo-rcjmv   2/2     Running   0          5m15s   10.244.59.16    k8s-master.org

Example 2: Adding the NoExecute effect to a node to evict all of its Pods
[root@k8s-master Scheduler]# kubectl taint --help
Update the taints on one or more nodes.

 * A taint consists of a key, value, and effect. As an argument here, it is expressed as key=value:effect.
 * The key must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 253 characters.
 * Optionally, the key can begin with a DNS subdomain prefix and a single '/', like example.com/my-app
 * The value is optional. If given, it must begin with a letter or number, and may contain letters, numbers, hyphens, dots, and underscores, up to 63 characters.
 * The effect must be NoSchedule, PreferNoSchedule or NoExecute.
 * Currently taint can only apply to node.

Examples:
  # Update node 'foo' with a taint with key 'dedicated' and value 'special-user' and effect 'NoSchedule'.
  # If a taint with that key and effect already exists, its value is replaced as specified.
  kubectl taint nodes foo dedicated=special-user:NoSchedule

  # Remove from node 'foo' the taint with key 'dedicated' and effect 'NoSchedule' if one exists.
  kubectl taint nodes foo dedicated:NoSchedule-

  # Remove from node 'foo' all the taints with key 'dedicated'
  kubectl taint nodes foo dedicated-

  # Add a taint with key 'dedicated' on nodes having label mylabel=X
  kubectl taint node -l myLabel=X dedicated=foo:PreferNoSchedule

  # Add to node 'foo' a taint with key 'bar' and no value
  kubectl taint nodes foo bar:NoSchedule

[root@k8s-master Scheduler]# kubectl get pod -o wide
NAME                                         READY   STATUS    RESTARTS   AGE   IP               NODE
daemonset-demo-7ghhd                         1/1     Running   0          23m   192.168.113.35   k8s-node1
daemonset-demo-cjxd5                         1/1     Running   0          23m   192.168.12.35    k8s-node2
daemonset-demo-lhng4                         1/1     Running   0          23m   192.168.237.4    k8s-master
daemonset-demo-x5nhg                         1/1     Running   0          23m   192.168.51.54    k8s-node3
pod-antiaffinity-required-697f7d764d-69vx4   0/1     Pending   0          8s
pod-antiaffinity-required-697f7d764d-7cxp2   1/1     Running   0          8s    192.168.51.55    k8s-node3
pod-antiaffinity-required-697f7d764d-rpb5r   1/1     Running   0          8s    192.168.12.36    k8s-node2
pod-antiaffinity-required-697f7d764d-vf2x8   1/1     Running   0          8s    192.168.113.36   k8s-node1

  • Taint k8s-node3 with the NoExecute effect, evicting all Pods running on it:
[root@k8s-master Scheduler]# kubectl taint node k8s-node3 diskfull=true:NoExecute
node/k8s-node3 tainted
[root@k8s-master Scheduler]# kubectl describe node k8s-node3
...
CreationTimestamp:  Sun, 29 Aug 2021 22:45:43 +0800
Taints:             diskfull=true:NoExecute

  • All Pods on the node have now been evicted. However, because the pod-antiaffinity-required Pods are defined so that at most one Pod of the same kind may run per node, the evicted replicas hang in Pending instead of being recreated on the other nodes (a sketch of that Deployment follows the output below):
[root@k8s-master Scheduler]# kubectl get pod -o wide
NAME                                         READY   STATUS    RESTARTS   AGE     IP               NODE
daemonset-demo-7ghhd                         1/1     Running   0          31m     192.168.113.35   k8s-node1
daemonset-demo-cjxd5                         1/1     Running   0          31m     192.168.12.35    k8s-node2
daemonset-demo-lhng4                         1/1     Running   0          31m     192.168.237.4    k8s-master
pod-antiaffinity-required-697f7d764d-69vx4   0/1     Pending   0          7m45s
pod-antiaffinity-required-697f7d764d-l86td   0/1     Pending   0          6m5s
pod-antiaffinity-required-697f7d764d-rpb5r   1/1     Running   0          7m45s   192.168.12.36    k8s-node2
pod-antiaffinity-required-697f7d764d-vf2x8   1/1     Running   0          7m45s   192.168.113.36   k8s-node1
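For context, pod-antiaffinity-required comes from an earlier note in this series; its spec looks roughly like the sketch below (replica count inferred from the output above; the image and labels are guesses). The required anti-affinity on the hostname topology key is what keeps an evicted replica from doubling up on a surviving node:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: pod-antiaffinity-required
spec:
  replicas: 4
  selector:
    matchLabels:
      app: demoapp
  template:
    metadata:
      labels:
        app: demoapp
    spec:
      affinity:
        podAntiAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:   # hard rule: at most one such Pod per node
          - labelSelector:
              matchExpressions:
              - {key: app, operator: In, values: ["demoapp"]}
            topologyKey: kubernetes.io/hostname             # "one per node" is scoped by hostname
      containers:
      - name: demoapp
        image: ikubernetes/demoapp:v1.0                     # placeholder image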

  • Remove the taint, and the Pods are recreated:
[root@k8s-master Scheduler]# kubectl taint node k8s-node3 diskfull-
node/k8s-node3 untainted
[root@k8s-master Scheduler]# kubectl get pod -o wide
NAME                                         READY   STATUS              RESTARTS   AGE    IP               NODE
daemonset-demo-7ghhd                         1/1     Running             0          34m    192.168.113.35   k8s-node1
daemonset-demo-cjxd5                         1/1     Running             0          34m    192.168.12.35    k8s-node2
daemonset-demo-lhng4                         1/1     Running             0          34m    192.168.237.4    k8s-master
daemonset-demo-m6g26                         0/1     ContainerCreating   0          4s                      k8s-node3
pod-antiaffinity-required-697f7d764d-69vx4   0/1     ContainerCreating   0          10m                     k8s-node3
pod-antiaffinity-required-697f7d764d-l86td   0/1     Pending             0          9m1s
pod-antiaffinity-required-697f7d764d-rpb5r   1/1     Running             0          10m    192.168.12.36    k8s-node2
pod-antiaffinity-required-697f7d764d-vf2x8   1/1     Running             0          10m    192.168.113.36   k8s-node1

References:
https://www.cnblogs.com/ssgee...
