This article walks through using kubeadm to build a production-grade Kubernetes cluster.

Concepts
kubeadm bootstraps a Kubernetes cluster on top of existing machines and performs a set of basic maintenance tasks; it does not build out the underlying server environment, but only adds the essential add-on components CoreDNS and kube-proxy to the cluster. Its two core commands are `kubeadm init` and `kubeadm join`: the former creates a new control plane, the latter quickly joins nodes to that control plane. Together they can initialize a production-grade Kubernetes cluster.

Deployment Architecture
Environment Preparation

System environment
- OS:CentOS 7.6
- Docker Version:19.03.8
- Kubernetes:1.18.2
IP Address | Hostname | Role
--- | --- | ---
192.168.248.150 | k8s-master-01 | master
192.168.248.151 | k8s-master-02 | master
192.168.248.152 | k8s-master-03 | master
192.168.248.153 | k8s-node-01 | worker
192.168.248.154 | k8s-node-02 | worker
192.168.248.155 | k8s-node-03 | worker
- Upgrade the system kernel to the latest version
- Disable the system firewall and SELinux
- Configure hostname resolution
- Synchronize time
- Disable swap
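The hostname resolution step can be scripted once and reused on every machine. A minimal sketch, using the IPs and hostnames from the table above; it writes the entries to a local fragment file first, so the append to /etc/hosts (which needs root) stays a deliberate final step:

```shell
# Generate the cluster's host entries into a fragment file.
cat > k8s-hosts.txt << 'EOF'
192.168.248.150 k8s-master-01
192.168.248.151 k8s-master-02
192.168.248.152 k8s-master-03
192.168.248.153 k8s-node-01
192.168.248.154 k8s-node-02
192.168.248.155 k8s-node-03
EOF
# Then, as root on each node:
# cat k8s-hosts.txt >> /etc/hosts
```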
Add the Aliyun Docker repository and install Docker
# yum install -y yum-utils device-mapper-persistent-data lvm2
# yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
# yum makecache fast
# yum -y install docker-ce
# systemctl enable docker && systemctl start docker
Configure a Docker registry mirror and the storage driver
# cat /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors": ["https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn"]
}
# systemctl restart docker    # restart Docker so the configuration takes effect
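A hand-edited daemon.json is easy to break, and Docker will refuse to start on invalid JSON. Before restarting, the file can be syntax-checked; a sketch that writes the configuration above to a local copy and validates it with Python's stdlib json module:

```shell
# Write the daemon.json shown above and fail fast if it is not valid JSON.
cat > daemon.json << 'EOF'
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2",
  "registry-mirrors": ["https://registry.docker-cn.com","https://docker.mirrors.ustc.edu.cn"]
}
EOF
python3 -m json.tool daemon.json > /dev/null && echo "daemon.json: valid JSON"
```

Run the same check against /etc/docker/daemon.json on the real host before the restart.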
Raise the file descriptor and process limits
# echo "* soft nofile 65536" >> /etc/security/limits.conf
# echo "* hard nofile 65536" >> /etc/security/limits.conf
# echo "* soft nproc 65536" >> /etc/security/limits.conf
# echo "* hard nproc 65536" >> /etc/security/limits.conf
# echo "* soft memlock unlimited" >> /etc/security/limits.conf
# echo "* hard memlock unlimited" >> /etc/security/limits.conf
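The repeated echo lines can be collapsed into a single heredoc append. A sketch; the target path is a variable (defaulting here to a local copy rather than /etc/security/limits.conf) so it can be dry-run without root:

```shell
# Append all limit entries in one pass. Point LIMITS_FILE at
# /etc/security/limits.conf for the real run.
LIMITS_FILE="${LIMITS_FILE:-limits.conf.local}"
cat >> "$LIMITS_FILE" << 'EOF'
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
* soft memlock unlimited
* hard memlock unlimited
EOF
```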
Tune kernel parameters
# modprobe overlay
# modprobe br_netfilter
# cat > /etc/sysctl.d/99-kubernetes-cri.conf << EOF
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
# sysctl --system
Install kubelet, kubeadm, and kubectl
# cat << EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
# yum install -y kubelet kubeadm kubectl
# systemctl enable kubelet
Deploying the Control Plane

Initialize the first control plane node:
# kubeadm init \
    --image-repository registry.aliyuncs.com/google_containers \
    --kubernetes-version v1.18.2 \
    --control-plane-endpoint k8s-api-server \
    --apiserver-advertise-address 192.168.248.151 \
    --pod-network-cidr 10.244.0.0/16 \
    --token-ttl 0
Parameter notes:
- `--image-repository`: use a domestic mirror registry; the default upstream image registries are unreachable from inside China.
- `--kubernetes-version`: the Kubernetes version to deploy.
- `--control-plane-endpoint`: a stable access point for the control plane, either an IP address or a DNS name. It becomes the API server address in the kubeconfig files used by cluster administrators and cluster components.
- `--apiserver-advertise-address`: the IP address the API server advertises to the other components, normally this master node's address on the cluster-internal network.
- `--pod-network-cidr`: the pod network address range; flannel defaults to 10.244.0.0/16 and calico to 192.168.0.0/16.
- `--token-ttl`: lifetime of the shared bootstrap token, 24h by default; 0 means it never expires.
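The same flags can be captured in a configuration file and passed with `kubeadm init --config`, which keeps the setup reproducible. A sketch using the v1beta2 kubeadm API that the 1.18 release accepts (the file name kubeadm-config.yaml is arbitrary; `--token-ttl` can likewise be expressed via bootstrapTokens in InitConfiguration, omitted here):

```shell
# Write the init flags above as a kubeadm configuration file.
cat > kubeadm-config.yaml << 'EOF'
apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.248.151
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: v1.18.2
imageRepository: registry.aliyuncs.com/google_containers
controlPlaneEndpoint: k8s-api-server
networking:
  podSubnet: 10.244.0.0/16
EOF
# kubeadm init --config kubeadm-config.yaml
```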
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of control-plane nodes by copying certificate authorities
and service account keys on each node and then running the following as root:
kubeadm join k8s-api-server:6443 --token yonz9r.2025b6wu414sptes \
    --discovery-token-ca-cert-hash sha256:391f19638bbb4c50d57a32b8c5b670ce8cfaddfa4f022384a03d1e8f6462430f \
--control-plane
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join k8s-api-server:6443 --token yonz9r.2025b6wu414sptes \
--discovery-token-ca-cert-hash sha256:391f19638bbb4c50d57a32b8c5b670ce8cfaddfa4f022384a03d1e8f6462430f
Following the prompts, create the directory, copy the admin config file into place, and fix its ownership to finish setting up the control plane:
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Deploy the flannel network plugin
# kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Add Worker Nodes
# Run the following on every worker node to join it to the cluster.
# kubeadm join k8s-api-server:6443 --token yonz9r.2025b6wu414sptes \
--discovery-token-ca-cert-hash sha256:391f19638bbb4c50d57a32b8c5b670ce8cfaddfa4f022384a03d1e8f6462430f
Add the Remaining Control Plane Nodes

Control plane nodes must share the certificates and private keys of the Kubernetes CA, etcd CA, and front-proxy CA, along with the service account keys. There are two ways to do this:
- Manually copy the certificate and private key files generated on the first control plane node to the other master nodes
- Use the `kubeadm init phase` command
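For the manual approach, the set of files to copy from the first master is well defined. A sketch listing them under kubeadm's default /etc/kubernetes/pki layout; the tar and scp commands are shown commented because they must run on the actual first master:

```shell
# Certificate/key files every control plane node must share
# (relative to /etc/kubernetes/pki on the first master).
SHARED_CERTS="ca.crt ca.key sa.key sa.pub \
front-proxy-ca.crt front-proxy-ca.key \
etcd/ca.crt etcd/ca.key"

# On the first master, bundle them up:
# tar -C /etc/kubernetes/pki -czf control-plane-certs.tar.gz $SHARED_CERTS
# Copy the tarball to each new master and unpack into /etc/kubernetes/pki
# there before running kubeadm join --control-plane.
echo "$SHARED_CERTS"
```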
Generate the secret used for joining the control plane:
# kubeadm init phase upload-certs --upload-certs
W0426 01:58:31.877454   72035 configset.go:202] WARNING: kubeadm cannot validate component configs for API groups [kubelet.config.k8s.io kubeproxy.config.k8s.io]
[upload-certs] Storing the certificates in Secret "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key:
dab5a1601dae683f429ed43a795ed345120a030681412a419d327c8893d90d74
The secret generated by this command lives for 2 hours; once it expires, rerun the command above to generate a new one.
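Since the bootstrap token and the certificate key expire on different schedules (24h vs 2h by default), joining a master later usually means regenerating both and reassembling the join command. A sketch that stitches the three pieces together; the token, hash, and key values below are the ones from this article's output and are placeholders for freshly generated ones (from `kubeadm token create` and `kubeadm init phase upload-certs --upload-certs`):

```shell
# Compose the control plane join command from its three inputs.
TOKEN="yonz9r.2025b6wu414sptes"
CA_CERT_HASH="sha256:391f19638bbb4c50d57a32b8c5b670ce8cfaddfa4f022384a03d1e8f6462430f"
CERT_KEY="dab5a1601dae683f429ed43a795ed345120a030681412a419d327c8893d90d74"

JOIN_CMD="kubeadm join k8s-api-server:6443 --token $TOKEN \
--discovery-token-ca-cert-hash $CA_CERT_HASH \
--control-plane --certificate-key $CERT_KEY"
echo "$JOIN_CMD"
```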
Join the additional control plane nodes:
# kubeadm join k8s-api-server:6443 --token yonz9r.2025b6wu414sptes \
    --discovery-token-ca-cert-hash sha256:391f19638bbb4c50d57a32b8c5b670ce8cfaddfa4f022384a03d1e8f6462430f \
    --control-plane --certificate-key dab5a1601dae683f429ed43a795ed345120a030681412a419d327c8893d90d74
# mkdir -p $HOME/.kube
# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
# sudo chown $(id -u):$(id -g) $HOME/.kube/config
Verifying Cluster Status

Run the following on any control plane node. Check component status:
# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
controller-manager   Healthy   ok
Check node status:
# kubectl get node
NAME            STATUS   ROLES    AGE   VERSION
k8s-master-01   Ready    master   15h   v1.18.2
k8s-master-02   Ready    master   17h   v1.18.2
k8s-master-03   Ready    master   15h   v1.18.2
k8s-node-01     Ready    <none>   16h   v1.18.2
k8s-node-02     Ready    <none>   16h   v1.18.2
k8s-node-03     Ready    <none>   16h   v1.18.2
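For a quick scripted health check, the `kubectl get node` output can be piped through awk to count nodes that are not Ready. A sketch, run here against a captured copy of the output above; on a live cluster, replace the heredoc with `kubectl get node --no-headers`:

```shell
# Count nodes whose STATUS column is not "Ready"; a non-zero count
# signals a problem. The heredoc stands in for live kubectl output.
NOT_READY=$(awk '$2 != "Ready" { n++ } END { print n+0 }' << 'EOF'
k8s-master-01   Ready    master   15h   v1.18.2
k8s-master-02   Ready    master   17h   v1.18.2
k8s-master-03   Ready    master   15h   v1.18.2
k8s-node-01     Ready    <none>   16h   v1.18.2
k8s-node-02     Ready    <none>   16h   v1.18.2
k8s-node-03     Ready    <none>   16h   v1.18.2
EOF
)
echo "nodes not Ready: $NOT_READY"
```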
Check pod status:
# kubectl get pod -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-7ff77c879f-btz6j                1/1     Running   1          15h
coredns-7ff77c879f-qzjnm                1/1     Running   1          15h
etcd-k8s-master-01                      1/1     Running   2          15h
etcd-k8s-master-02                      1/1     Running   1          17h
etcd-k8s-master-03                      1/1     Running   1          15h
kube-apiserver-k8s-master-01            1/1     Running   5          15h
kube-apiserver-k8s-master-02            1/1     Running   3          17h
kube-apiserver-k8s-master-03            1/1     Running   1          15h
kube-controller-manager-k8s-master-01   1/1     Running   4          15h
kube-controller-manager-k8s-master-02   1/1     Running   7          17h
kube-controller-manager-k8s-master-03   1/1     Running   1          15h
kube-flannel-ds-amd64-77mtr             1/1     Running   1          15h
kube-flannel-ds-amd64-bx4kt             1/1     Running   2          16h
kube-flannel-ds-amd64-fhfwv             1/1     Running   1          15h
kube-flannel-ds-amd64-kwfpq             1/1     Running   1          16h
kube-flannel-ds-amd64-qcd5b             1/1     Running   1          17h
kube-flannel-ds-amd64-vbp56             1/1     Running   2          16h
kube-proxy-4bgvr                        1/1     Running   1          15h
kube-proxy-7d944                        1/1     Running   1          15h
kube-proxy-878d8                        1/1     Running   1          16h
kube-proxy-9qf5j                        1/1     Running   1          16h
kube-proxy-pbg8w                        1/1     Running   1          17h
kube-proxy-wmtn4                        1/1     Running   1          15h
kube-scheduler-k8s-master-01            1/1     Running   2          15h
kube-scheduler-k8s-master-02            1/1     Running   8          17h
kube-scheduler-k8s-master-03            1/1     Running   1          15h