Smooth Binary Upgrade of a Kubernetes Cluster

Upgrade notes

For a Kubernetes minor (patch) version upgrade, replacing the binaries is basically all that is required. For a major version upgrade, pay attention to changes in kubelet flags and to behavioral changes in other components after the upgrade. Because Kubernetes releases move quickly and many dependencies have not fully caught up, running the newest versions in production is not recommended.

Before upgrading production, test and validate the procedure repeatedly in a test environment. Downloads and changelog for v1.21.7:
https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG/CHANGELOG-1.21.md#downloads-for-v1217
Component                Before upgrade   After upgrade
Etcd                     3.4.13           no upgrade needed
kube-apiserver           1.20.13          1.21.7
kube-scheduler           1.20.13          1.21.7
kube-controller-manager  1.20.13          1.21.7
kubectl                  1.20.13          1.21.7
kube-proxy               1.20.13          1.21.7
kubelet                  1.20.13          1.21.7
calico                   3.15.3           3.21.1
coredns                  1.7.0            1.8.6
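Before making any changes, record what is actually running so there is a baseline to compare against afterwards. A minimal sketch, assuming the binaries live in /usr/local/bin as elsewhere in this guide:

# Print the running version of every component that will be upgraded
for bin in kube-apiserver kube-controller-manager kube-scheduler kubelet kube-proxy; do
    echo -n "$bin: "; /usr/local/bin/$bin --version
done
kubectl version --client --short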
Upgrade the Masters

Master01
# Back up the current binaries
[root@k8s-master01 ~]# cd /usr/local/bin/
[root@k8s-master01 bin]# cp kube-apiserver kube-controller-manager kube-scheduler kubectl /tmp

# Download the release tarball
[root@k8s-master01 ~]# wget https://storage.googleapis.com/kubernetes-release/release/v1.21.7/kubernetes-server-linux-amd64.tar.gz
[root@k8s-master01 ~]# tar zxf kubernetes-server-linux-amd64.tar.gz

# Stop the services
[root@k8s-master01 ~]# systemctl stop kube-apiserver
[root@k8s-master01 ~]# systemctl stop kube-controller-manager
[root@k8s-master01 ~]# systemctl stop kube-scheduler

# Replace the binaries
[root@k8s-master01 bin]# cd /root/kubernetes/server/bin/
[root@k8s-master01 bin]# ll
total 1075472
-rwxr-xr-x 1 root root  50810880 Nov 17 22:52 apiextensions-apiserver
-rwxr-xr-x 1 root root  44867584 Nov 17 22:52 kubeadm
-rwxr-xr-x 1 root root  48586752 Nov 17 22:52 kube-aggregator
-rwxr-xr-x 1 root root 122204160 Nov 17 22:52 kube-apiserver
-rw-r--r-- 1 root root         8 Nov 17 22:50 kube-apiserver.docker_tag
-rw------- 1 root root 126985216 Nov 17 22:50 kube-apiserver.tar
-rwxr-xr-x 1 root root 116404224 Nov 17 22:52 kube-controller-manager
-rw-r--r-- 1 root root         8 Nov 17 22:50 kube-controller-manager.docker_tag
-rw------- 1 root root 121185280 Nov 17 22:50 kube-controller-manager.tar
-rwxr-xr-x 1 root root  46669824 Nov 17 22:52 kubectl
-rwxr-xr-x 1 root root  55317704 Nov 17 22:52 kubectl-convert
-rwxr-xr-x 1 root root 118390192 Nov 17 22:52 kubelet
-rwxr-xr-x 1 root root  43376640 Nov 17 22:52 kube-proxy
-rw-r--r-- 1 root root         8 Nov 17 22:50 kube-proxy.docker_tag
-rw------- 1 root root 105378304 Nov 17 22:50 kube-proxy.tar
-rwxr-xr-x 1 root root  47349760 Nov 17 22:52 kube-scheduler
-rw-r--r-- 1 root root         8 Nov 17 22:50 kube-scheduler.docker_tag
-rw------- 1 root root  52130816 Nov 17 22:50 kube-scheduler.tar
-rwxr-xr-x 1 root root   1593344 Nov 17 22:52 mounter
[root@k8s-master01 bin]# cp kube-apiserver kube-scheduler kubectl kube-controller-manager /usr/local/bin/
cp: overwrite ‘/usr/local/bin/kube-apiserver’? y
cp: overwrite ‘/usr/local/bin/kube-scheduler’? y
cp: overwrite ‘/usr/local/bin/kubectl’? y
cp: overwrite ‘/usr/local/bin/kube-controller-manager’? y

# Start the services
[root@k8s-master01 bin]# systemctl start kube-apiserver
[root@k8s-master01 bin]# systemctl start kube-controller-manager
[root@k8s-master01 bin]# systemctl start kube-scheduler

# Check the services for errors
[root@k8s-master01 bin]# systemctl status kube-apiserver
[root@k8s-master01 bin]# systemctl status kube-controller-manager
[root@k8s-master01 bin]# systemctl status kube-scheduler

# Check the versions
[root@k8s-master01 bin]# kube-apiserver --version
Kubernetes v1.21.7
[root@k8s-master01 bin]# kube-scheduler --version
Kubernetes v1.21.7
[root@k8s-master01 bin]# kube-controller-manager --version
Kubernetes v1.21.7
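The CHANGELOG page linked above publishes SHA512 checksums for every tarball, so the download can be verified before unpacking; the value printed below must match the one listed for kubernetes-server-linux-amd64.tar.gz under v1.21.7:

# Compare against the sha512 listed on the v1.21.7 changelog page
[root@k8s-master01 ~]# sha512sum kubernetes-server-linux-amd64.tar.gz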

Master02
# Stop the services
[root@k8s-master02 ~]# systemctl stop kube-apiserver
[root@k8s-master02 ~]# systemctl stop kube-controller-manager
[root@k8s-master02 ~]# systemctl stop kube-scheduler

# Push the new binaries from Master01
[root@k8s-master01 bin]# scp kube-apiserver kube-controller-manager kube-scheduler root@k8s-master02:/usr/local/bin
kube-apiserver                100%  117MB  72.2MB/s   00:01
kube-controller-manager       100%  111MB  74.3MB/s   00:01
kube-scheduler                100%   45MB  71.1MB/s   00:00

# Start the services
[root@k8s-master02 ~]# systemctl start kube-apiserver
[root@k8s-master02 ~]# systemctl start kube-controller-manager
[root@k8s-master02 ~]# systemctl start kube-scheduler

# Check the services for errors
[root@k8s-master02 ~]# systemctl status kube-apiserver
[root@k8s-master02 ~]# systemctl status kube-controller-manager
[root@k8s-master02 ~]# systemctl status kube-scheduler

# Check the versions
[root@k8s-master02 ~]# kube-apiserver --version
Kubernetes v1.21.7
[root@k8s-master02 ~]# kube-scheduler --version
Kubernetes v1.21.7
[root@k8s-master02 ~]# kube-controller-manager --version
Kubernetes v1.21.7

Master03
# Stop the services
[root@k8s-master03 ~]# systemctl stop kube-apiserver
[root@k8s-master03 ~]# systemctl stop kube-controller-manager
[root@k8s-master03 ~]# systemctl stop kube-scheduler

# Push the new binaries from Master01
[root@k8s-master01 bin]# scp kube-apiserver kube-controller-manager kube-scheduler root@k8s-master03:/usr/local/bin
kube-apiserver                100%  117MB  72.2MB/s   00:01
kube-controller-manager       100%  111MB  74.3MB/s   00:01
kube-scheduler                100%   45MB  71.1MB/s   00:00

# Start the services
[root@k8s-master03 ~]# systemctl start kube-apiserver
[root@k8s-master03 ~]# systemctl start kube-controller-manager
[root@k8s-master03 ~]# systemctl start kube-scheduler

# Check the services for errors
[root@k8s-master03 ~]# systemctl status kube-apiserver
[root@k8s-master03 ~]# systemctl status kube-controller-manager
[root@k8s-master03 ~]# systemctl status kube-scheduler

# Check the versions
[root@k8s-master03 ~]# kube-apiserver --version
Kubernetes v1.21.7
[root@k8s-master03 ~]# kube-scheduler --version
Kubernetes v1.21.7
[root@k8s-master03 ~]# kube-controller-manager --version
Kubernetes v1.21.7
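Once the procedure has been proven by hand, the remaining masters can be scripted for future upgrades. A hedged sketch, assuming passwordless SSH from Master01 and that it is run from /root/kubernetes/server/bin:

# Upgrade the other masters in one pass
for node in k8s-master02 k8s-master03; do
    ssh root@$node 'systemctl stop kube-apiserver kube-controller-manager kube-scheduler'
    scp kube-apiserver kube-controller-manager kube-scheduler root@$node:/usr/local/bin/
    ssh root@$node 'systemctl start kube-apiserver kube-controller-manager kube-scheduler'
    ssh root@$node 'kube-apiserver --version'   # should print Kubernetes v1.21.7
done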

Upgrade the Nodes
Upgrade nodes during off-peak hours, one node at a time, so the rollout stays smooth.
Mark the node unschedulable
[root@k8s-master01 ~]# kubectl cordon k8s-node01
node/k8s-node01 cordoned
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS                     ROLES    AGE     VERSION
k8s-master01   Ready                      <none>   2d21h   v1.20.13
k8s-master02   Ready                      <none>   2d21h   v1.20.13
k8s-master03   Ready                      <none>   2d21h   v1.20.13
k8s-node01     Ready,SchedulingDisabled   <none>   2d18h   v1.20.13
k8s-node02     Ready                      <none>   2d18h   v1.20.13

Evict the Pods
[root@k8s-master01 ~]# kubectl drain k8s-node01 --delete-local-data --ignore-daemonsets --force
Flag --delete-local-data has been deprecated, This option is deprecated and will be deleted. Use --delete-emptydir-data.
node/k8s-node01 already cordoned
WARNING: deleting Pods not managed by ReplicationController, ReplicaSet, Job, DaemonSet or StatefulSet: default/busybox; ignoring DaemonSet-managed Pods: kube-system/calico-node-jrc6b
evicting pod default/busybox
pod/busybox evicted

  • --delete-local-data: evict Pods even if they use emptyDir volumes (their data is lost). This flag is deprecated in favor of --delete-emptydir-data; the updated invocation is shown after this list.
  • --ignore-daemonsets: skip DaemonSet-managed Pods. Without this flag, drain would delete them, the DaemonSet controller would immediately recreate them on the node, and the drain would loop forever.
  • --force: without it, drain only deletes Pods created by a ReplicationController, ReplicaSet, DaemonSet, StatefulSet, or Job; with it, "bare" Pods bound to no controller are deleted as well.
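Since --delete-local-data is deprecated (the warning above says so explicitly), the equivalent invocation on current kubectl versions is:

kubectl drain k8s-node01 --delete-emptydir-data --ignore-daemonsets --force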
Stop the services
[root@k8s-node01 ~]# systemctl stop kube-proxy
[root@k8s-node01 ~]# systemctl stop kubelet

Back up the binaries
[root@k8s-node01 ~]# mv /usr/local/bin/kubelet /tmp
[root@k8s-node01 ~]# mv /usr/local/bin/kube-proxy /tmp
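Keeping the old binaries in /tmp provides a quick rollback path if the new version misbehaves. A minimal sketch, run on the node, assuming the backups above are still in place:

# Roll back to the previous kubelet and kube-proxy
systemctl stop kubelet kube-proxy
mv /tmp/kubelet /tmp/kube-proxy /usr/local/bin/
systemctl start kubelet kube-proxy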

Update the binaries
[root@k8s-master01 bin]# scp kubelet kube-proxy root@k8s-node01:/usr/local/bin
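Before starting the services, it is worth confirming the node actually received the new binaries. A quick check from Master01, assuming passwordless SSH:

[root@k8s-master01 bin]# ssh root@k8s-node01 'kubelet --version; kube-proxy --version'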

Start the services
[root@k8s-node01 ~]# systemctl start kubelet
[root@k8s-node01 ~]# systemctl start kube-proxy
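As on the masters, check that both services came up without errors; scanning the recent journal catches problems that a green status line can hide:

[root@k8s-node01 ~]# systemctl status kubelet kube-proxy
[root@k8s-node01 ~]# journalctl -u kubelet --no-pager -n 50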

Make the node schedulable again
[root@k8s-master01 bin]# kubectl uncordon k8s-node01
node/k8s-node01 uncordoned

Verify the upgrade
# As shown below, k8s-node01 has been upgraded to v1.21.7
[root@k8s-master01 ~]# kubectl get nodes
NAME           STATUS   ROLES    AGE     VERSION
k8s-master01   Ready    <none>   3d      v1.20.13
k8s-master02   Ready    <none>   3d      v1.20.13
k8s-master03   Ready    <none>   3d      v1.20.13
k8s-node01     Ready    <none>   2d20h   v1.21.7
k8s-node02     Ready    <none>   2d20h   v1.20.13
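k8s-node02 (and any further worker nodes) repeat the same cordon / drain / replace / uncordon cycle. A hedged sketch of the whole per-node loop, assuming passwordless SSH from Master01 and the unpacked release still under /root/kubernetes/server/bin:

for node in k8s-node02; do
    kubectl cordon $node
    kubectl drain $node --delete-emptydir-data --ignore-daemonsets --force
    ssh root@$node 'systemctl stop kubelet kube-proxy'
    scp /root/kubernetes/server/bin/kubelet /root/kubernetes/server/bin/kube-proxy root@$node:/usr/local/bin/
    ssh root@$node 'systemctl start kubelet kube-proxy'
    kubectl uncordon $node
done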

Upgrading Calico
Unless there is a specific need, upgrading Calico is generally not recommended.
Choose one of the following installation methods according to your datastore and node count (a quick way to check the currently running version is shown after the list):
  • Install Calico with the Kubernetes API datastore, 50 nodes or fewer
  • Install Calico with the Kubernetes API datastore, more than 50 nodes
  • Install Calico with the etcd datastore
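To see which Calico version is currently running, read the image tag off the calico-node DaemonSet that this guide backs up below:

kubectl get daemonset -n kube-system calico-node -o jsonpath='{.spec.template.spec.containers[0].image}'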
Backup
# Back up the Secrets
[root@k8s-master01 bak]# kubectl get secret -n kube-system calico-node-token-d6ck2 -o yaml > calico-node-token-d6ck2.yml
[root@k8s-master01 bak]# kubectl get secret -n kube-system calico-kube-controllers-token-r4v8n -o yaml > calico-kube-controllers-token.yml
[root@k8s-master01 bak]# kubectl get secret -n kube-system calico-etcd-secrets -o yaml > calico-etcd-secrets.yml

# Back up the ConfigMap
[root@k8s-master01 bak]# kubectl get cm -n kube-system calico-config -o yaml > calico-config.yaml

# Back up the ClusterRoles
[root@k8s-master01 bak]# kubectl get clusterrole calico-kube-controllers -o yaml > calico-kube-controllers-cr.yml
[root@k8s-master01 bak]# kubectl get clusterrole calico-node -o yaml > calico-node-cr.yml

# Back up the ClusterRoleBindings
[root@k8s-master01 bak]# kubectl get ClusterRoleBinding calico-kube-controllers -o yaml > calico-kube-controllers-crb.yml
[root@k8s-master01 bak]# kubectl get ClusterRoleBinding calico-node -o yaml > calico-node-crb.yml

# Back up the DaemonSet
[root@k8s-master01 bak]# kubectl get DaemonSet -n kube-system calico-node -o yaml > calico-node-daemonset.yml

# Back up the ServiceAccounts
[root@k8s-master01 bak]# kubectl get sa -n kube-system calico-kube-controllers -o yaml > calico-kube-controllers-sa.yml
[root@k8s-master01 bak]# kubectl get sa -n kube-system calico-node -o yaml > calico-node-sa.yml

# Back up the Deployment
[root@k8s-master01 bak]# kubectl get deployment -n kube-system calico-kube-controllers -o yaml > calico-kube-controllers-deployment.yml

Update
[root@k8s-master01 ~]# curl https://docs.projectcalico.org/manifests/calico-etcd.yaml -o calico-etcd.yaml

# Point the manifest at the existing etcd cluster
sed -i 's#etcd_endpoints: "http://<ETCD_IP>:<ETCD_PORT>"#etcd_endpoints: "https://192.168.1.101:2379,https://192.168.1.102:2379,https://192.168.1.103:2379"#g' calico-etcd.yaml

ETCD_CA=`cat /etc/kubernetes/pki/etcd/etcd-ca.pem | base64 | tr -d '\n'`
ETCD_CERT=`cat /etc/kubernetes/pki/etcd/etcd.pem | base64 | tr -d '\n'`
ETCD_KEY=`cat /etc/kubernetes/pki/etcd/etcd-key.pem | base64 | tr -d '\n'`

sed -i "s@# etcd-key: null@etcd-key: ${ETCD_KEY}@g; s@# etcd-cert: null@etcd-cert: ${ETCD_CERT}@g; s@# etcd-ca: null@etcd-ca: ${ETCD_CA}@g" calico-etcd.yaml

sed -i 's#etcd_ca: ""#etcd_ca: "/calico-secrets/etcd-ca"#g; s#etcd_cert: ""#etcd_cert: "/calico-secrets/etcd-cert"#g; s#etcd_key: ""#etcd_key: "/calico-secrets/etcd-key"#g' calico-etcd.yaml

# Change this to your own Pod CIDR
POD_SUBNET="172.16.0.0/12"
sed -i 's@# - name: CALICO_IPV4POOL_CIDR@- name: CALICO_IPV4POOL_CIDR@g; s@#   value: "192.168.0.0/16"@  value: '"${POD_SUBNET}"'@g' calico-etcd.yaml

# Apply the updated manifest (the file downloaded above is calico-etcd.yaml)
[root@k8s-master01 ~]# kubectl apply -f calico-etcd.yaml
secret/calico-etcd-secrets unchanged
configmap/calico-config configured
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers unchanged
clusterrole.rbac.authorization.k8s.io/calico-node configured
clusterrolebinding.rbac.authorization.k8s.io/calico-node unchanged
daemonset.apps/calico-node configured
serviceaccount/calico-node unchanged
deployment.apps/calico-kube-controllers configured
serviceaccount/calico-kube-controllers unchanged
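After applying the manifest, watch the DaemonSet roll out and confirm every calico-node Pod returns to Ready; the k8s-app=calico-node label below is the one used by the upstream manifest:

kubectl rollout status daemonset/calico-node -n kube-system
kubectl get pods -n kube-system -l k8s-app=calico-node -o wide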

Upgrading CoreDNS
[root@k8s-master01 ~]# git clone https://github.com/coredns/deployment.git
Cloning into 'deployment'...
[root@k8s-master01 ~]# cd deployment/kubernetes/
[root@k8s-master01 kubernetes]# ./deploy.sh -s -i 10.96.0.10 | kubectl apply -f -

# Verify the new image
[root@k8s-master01 ~]# kubectl get pod -n kube-system coredns-86f4cdc7bc-ccgr5 -o yaml | grep image
    image: coredns/coredns:1.8.6
    imagePullPolicy: IfNotPresent
    image: coredns/coredns:1.8.6
    imageID: docker-pullable://coredns/coredns@sha256:5b6ec0d6de9baaf3e92d0f66cd96a25b9edbce8716f5f15dcd1a616b3abd590e
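A short DNS smoke test confirms the upgraded CoreDNS is answering queries; busybox:1.28 is chosen because its nslookup is known to work reliably for this check:

kubectl run dns-test --image=busybox:1.28 --rm -it --restart=Never -- nslookup kubernetes.default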
