This article explains how to install Kubernetes 1.15.2 on CentOS 7.x using kubeadm.
(1) Environment
IP Address      OS           Role
192.168.4.21    CentOS 7.4   Master
192.168.4.20    CentOS 7.4   node1
192.168.4.19    CentOS 7.4   node2
(2) Base environment setup (run on every server)
1. Disable the firewall
[root@DEV004021 ~]# systemctl stop firewalld
[root@DEV004021 ~]# systemctl disable firewalld
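The upstream kubeadm install guide also asks for SELinux to be put into permissive mode on CentOS; this step is not in the original write-up, so treat the following as an optional sketch and skip it if SELinux is already disabled on your hosts:
# check the current SELinux mode
getenforce
# switch to permissive now and persist the setting across reboots
setenforce 0
sed -i 's/^SELINUX=enforcing$/SELINUX=permissive/' /etc/selinux/config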
2. Create the /etc/sysctl.d/k8s.conf file
[root@DEV004021 ~]# cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables=1
net.bridge.bridge-nf-call-ip6tables=1
net.ipv4.ip_forward=1
net.ipv4.tcp_tw_recycle=0
vm.swappiness=0 # do not use swap unless the system is about to run out of memory
vm.overcommit_memory=1 # do not check whether enough physical memory is available before overcommitting
vm.panic_on_oom=0 # do not panic on OOM; let the OOM killer handle it
fs.inotify.max_user_instances=8192
fs.inotify.max_user_watches=1048576
fs.file-max=52706963
fs.nr_open=52706963
net.ipv6.conf.all.disable_ipv6=1
net.netfilter.nf_conntrack_max=2310720
EOF
3. Load the bridge module and apply the settings above.
[root@DEV004021 ~]# modprobe br_netfilter
[root@DEV004021 ~]# sysctl -p /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
4. Disable swap
[root@DEV004021 ~]# sudo sed -i '/swap/s/^/#/' /etc/fstab
[root@DEV004021 ~]# sudo swapoff -a
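To confirm that swap is fully off before continuing, you can check both the running system and /etc/fstab, for example:
# the Swap line should show 0 total / 0 used
free -m
# every swap entry in fstab should now start with '#'
grep swap /etc/fstab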
5. Install Docker
5.1 Remove old versions of Docker
[root@DEV004021 ~]# sudo yum remove docker \
                    docker-client \
                    docker-client-latest \
                    docker-common \
                    docker-latest \
                    docker-latest-logrotate \
                    docker-logrotate \
                    docker-selinux \
                    docker-engine-selinux \
                    docker-engine
5.2 Install the required tools
[root@DEV004021 ~]# yum install -y yum-utils device-mapper-persistent-data lvm2
5.3 Add the Docker yum repository
[root@DEV004021 ~]# yum-config-manager --add-repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
5.4 Refresh the cache and install Docker
######## List the available versions
[root@DEV004021 ~]# yum list docker-ce --showduplicates
[root@DEV004021 ~]# yum makecache fast
[root@DEV004021 ~]# yum install docker-ce -y
##### On CentOS 8, containerd.io must be installed separately, as follows:
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
dnf install https://download.docker.com/linux/centos/7/x86_64/stable/Packages/containerd.io-1.2.6-3.3.el7.x86_64.rpm
yum install docker-ce docker-ce-cli
5.5 Configure a registry mirror
[root@localhost ~]# vim /etc/docker/daemon.json
{
  "registry-mirrors": ["http://hub-mirror.c.163.com"]
}
5.6 Enable and start the Docker service
systemctl enable docker
Created symlink from /etc/systemd/system/multi-user.target.wants/docker.service to /usr/lib/systemd/system/docker.service.
systemctl start docker
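As an optional sanity check, verify that Docker is running and that the registry mirror configured in 5.5 was picked up:
docker version
# the "Registry Mirrors" section of the output should list http://hub-mirror.c.163.com
docker info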
6. Install kubelet, kubeadm, and kubectl
[root@DEV004021 ~]# cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
[root@DEV004021 ~]# yum install -y kubelet-1.15.2 kubeadm-1.15.2 kubectl-1.15.2   # pin the packages to the target cluster version
systemctl enable --now kubelet
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
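It is worth confirming that the installed tools match the cluster version to be deployed (v1.15.2 here), for example:
# all of the following should report v1.15.2
kubeadm version -o short
kubectl version --client --short
rpm -q kubelet kubeadm kubectl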
(3) Build the Kubernetes cluster
1. Initialize the master node (run on the master node only).
[root@DEV004021 ~]# kubeadm init \
  --apiserver-advertise-address=192.168.4.21 \
  --image-repository registry.aliyuncs.com/google_containers \
  --kubernetes-version v1.15.2 \
  --service-cidr=10.1.0.0/16 \
  --pod-network-cidr=10.244.0.0/16
#### --pod-network-cidr: required by the flannel network deployed later, and must be 10.244.0.0/16. --image-repository: pull the control-plane images from the Aliyun mirror instead of the default registry.
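Optionally, the control-plane images can be pulled ahead of time so that kubeadm init does not stall on downloads; a sketch using the same repository and version as above:
# list and pre-pull the images kubeadm init will need
kubeadm config images list --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.2
kubeadm config images pull --image-repository registry.aliyuncs.com/google_containers --kubernetes-version v1.15.2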
2. Check the output; the following lines indicate that initialization succeeded.
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run " kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 192.168.4.21:6443 --token dmzz6x.t864anv0btkyxjwi \
    --discovery-token-ca-cert-hash sha256:2a8bbdd54dcc01435be1a3b443d33d0ce932c8d81c6d9ae8b3c248325977ceb1
3. Run the following commands in order:
[root@DEV004021 ~]# mkdir -p $HOME/.kube
[root@DEV004021 ~]# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
[root@DEV004021 ~]# sudo chown $(id -u):$(id -g) $HOME/.kube/config
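At this point kubectl should be able to reach the API server; note that the master will report NotReady until a pod network is deployed in the next step:
# expect STATUS=NotReady until the flannel add-on from step 4 is applied
kubectl get nodes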
4. Deploy a pod network to the cluster
[root@K8S-Master opt]# wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
--2020-03-10 17:05:12--  https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 151.101.76.133
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|151.101.76.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 14416 (14K) [text/plain]
Saving to: ‘kube-flannel.yml’
kube-flannel.yml            100%[====================================================================================================>]  14.08K  --.-KB/s    in 0.04s
2020-03-10 17:05:13 (330 KB/s) - ‘kube-flannel.yml’ saved [14416/14416]
[root@K8S-Master opt]# kubectl apply -f kube-flannel.yml
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel unchanged
clusterrolebinding.rbac.authorization.k8s.io/flannel unchanged
serviceaccount/flannel unchanged
configmap/kube-flannel-cfg configured
daemonset.apps/kube-flannel-ds-amd64 created
daemonset.apps/kube-flannel-ds-arm64 created
daemonset.apps/kube-flannel-ds-arm created
daemonset.apps/kube-flannel-ds-ppc64le created
daemonset.apps/kube-flannel-ds-s390x created
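It can take a minute or two for the flannel and CoreDNS pods to reach Running; something like the following can be used to confirm progress and that the master turns Ready:
# flannel pods in this manifest carry the label app=flannel
kubectl get pods -n kube-system -l app=flannel
# once flannel and CoreDNS are Running, the master should report Ready
kubectl get nodes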
5. The master node is now initialized; check the cluster information.
###### Check the cluster info
[root@otrs004021 ~]# kubectl cluster-info
Kubernetes master is running at https://192.168.4.21:6443
KubeDNS is running at https://192.168.4.21:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use kubectl cluster-info dump.
##### Check the nodes
[root@otrs004021 ~]# kubectl get nodes
NAME         STATUS   ROLES    AGE     VERSION
otrs004097   Ready    master   6m27s   v1.15.2
############# Check the pods
[root@otrs004021 ~]# kubectl get pods --all-namespaces
NAMESPACE     NAME                                 READY   STATUS    RESTARTS   AGE
kube-system   coredns-bccdc95cf-f5wtc              1/1     Running   0          6m32s
kube-system   coredns-bccdc95cf-lnp2j              1/1     Running   0          6m32s
kube-system   etcd-otrs004097                      1/1     Running   0          5m56s
kube-system   kube-apiserver-otrs004097            1/1     Running   0          5m38s
kube-system   kube-controller-manager-otrs004097   1/1     Running   0          5m40s
kube-system   kube-flannel-ds-amd64-xqdcf          1/1     Running   0          2m10s
kube-system   kube-proxy-2lz96                     1/1     Running   0          6m33s
kube-system   kube-scheduler-otrs004097            1/1     Running   0          5m45s
################### If initialization runs into problems, reset with the following commands and start over
[root@DEV004021 ~]# kubeadm reset
[root@DEV004021 ~]# rm -rf /var/lib/cni/
[root@DEV004021 ~]# rm -f $HOME/.kube/config
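Note that kubeadm reset does not flush iptables or IPVS rules; if a re-initialization misbehaves because of leftovers, they can be cleared manually, for example:
# flush leftover iptables rules created by kube-proxy/flannel
iptables -F && iptables -t nat -F && iptables -t mangle -F && iptables -X
# if kube-proxy was running in IPVS mode, also clear the IPVS tables (requires ipvsadm)
ipvsadm -C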
(4) Add the remaining nodes to the cluster; there are two methods.
Method 1: join with the token printed when the master was initialized
[root@DEV004021 ~]# kubeadm join 192.168.4.21:6443 --token dmzz6x.t864anv0btkyxjwi \
    --discovery-token-ca-cert-hash sha256:2a8bbdd54dcc01435be1a3b443d33d0ce932c8d81c6d9ae8b3c248325977ceb1
Method 2: generate a new token and join with it
[root@otrs004021 ~]# kubeadm token generate
3o7wop.z2kxzhy7p0zwnb3v
[root@otrs004021 ~]# kubeadm token create 3o7wop.z2kxzhy7p0zwnb3v --print-join-command --ttl=24h
kubeadm join 192.168.4.21:6443 --token 3o7wop.z2kxzhy7p0zwnb3v --discovery-token-ca-cert-hash sha256:2a8bbdd54dcc01435be1a3b443d33d0ce932c8d81c6d9ae8b3c248325977ceb1
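If the original --discovery-token-ca-cert-hash value has been lost, it can be recomputed on the master from the cluster CA certificate, and the currently valid tokens can be listed; a sketch:
# list tokens that are currently valid
kubeadm token list
# recompute the sha256 hash used by --discovery-token-ca-cert-hash
openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt | openssl rsa -pubin -outform der 2>/dev/null | \
  openssl dgst -sha256 -hex | sed 's/^.* //'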
2. Run the join command on each of the other nodes to add them to the cluster:
[root@DEV004021 ~]# kubeadm join 192.168.4.21:6443 --token 3o7wop.z2kxzhy7p0zwnb3v --discovery-token-ca-cert-hash sha256:2a8bbdd54dcc01435be1a3b443d33d0ce932c8d81c6d9ae8b3c248325977ceb1
[root@DEV004021 yum.repos.d]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
dev004019   Ready    <none>   3d    v1.15.2
dev004020   Ready    <none>   3d    v1.15.2
dev004021   Ready    master   3d    v1.15.2
At this point, a Kubernetes cluster with one master and two worker nodes is up and running.
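As an optional smoke test of the new cluster, a throwaway nginx deployment (the name nginx-test below is hypothetical) can be scheduled and exposed to confirm that pods run on the workers:
# create a test deployment and expose it via a NodePort service
kubectl create deployment nginx-test --image=nginx
kubectl expose deployment nginx-test --port=80 --type=NodePort
# check which node the pod landed on and which NodePort was assigned
kubectl get pods -o wide
kubectl get svc nginx-test
# clean up afterwards
kubectl delete svc,deployment nginx-test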
(5) Remove a node from the cluster (using node2 as the example)
1. Run on the master node
kubectl drain node2 --delete-local-data --force --ignore-daemonsets
kubectl delete node node2
2. Run on the node being removed (node2)
kubeadm reset
ifconfig cni0 down
ip link delete cni0
ifconfig flannel.1 down
ip link delete flannel.1
rm -rf /var/lib/cni/
3. Run on node1 (optional; the node object was already deleted on the master in step 1, and this requires an admin kubeconfig on node1)
kubectl delete node node2
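If the removed node needs to rejoin the cluster later, generate a fresh join command on the master (the original token may have expired) and run it on the node after the reset above:
# on the master: print a new join command with a fresh token
kubeadm token create --print-join-command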