K8S Notes - Installing k8s with kubeadm
Notes:
This is my complete set of notes from a recent deployment of a k8s lab environment; following it step by step should give you a working k8s cluster.
For more about k8s, see the official site:
https://kubernetes.io/
For more about kubeadm, see:
https://kubernetes.io/docs/re...
1. Preparation for deploying k8s
1.1 Environment requirements
Virtual machines: 3 in total, 1 for the k8s-master and 2 for the k8s-slave nodes
Operating system: CentOS 7
Hardware: at least 2 CPU cores and 2 GB of RAM per node
1.2 Node roles
IP | Role | Installed components |
---|---|---|
192.168.100.20 | k8s-master | kube-apiserver, kube-scheduler, kube-controller-manager, docker, flannel, kubelet |
192.168.100.21 | k8s-slave1 | kubelet, kube-proxy, docker, flannel |
192.168.100.22 | k8s-slave2 | kubelet, kube-proxy, docker, flannel |
2. Set the hostnames (run the matching command on each node)
hostnamectl set-hostname k8s-master   # on the master
hostnamectl set-hostname k8s-slave1   # on slave 1
hostnamectl set-hostname k8s-slave2   # on slave 2
3. Configure /etc/hosts (run on both master and slave nodes)
cat >> /etc/hosts <<EOF
192.168.100.20 k8s-master
192.168.100.21 k8s-slave1
192.168.100.22 k8s-slave2
EOF
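A quick sanity check (optional, not in the original notes) that the names resolve from each node:
for h in k8s-master k8s-slave1 k8s-slave2; do ping -c 1 $h; done   # each host should answer by name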
4. Set the iptables FORWARD policy (run on both master and slave nodes)
iptables -P FORWARD ACCEPT
5. Disable swap (run on both master and slave nodes)
swapoff -a
Prevent the swap partition from being mounted at boot (comment out the swap entry in /etc/fstab):
sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab
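To confirm that swap is really off (a quick check; the Swap line should show 0 and swapon should print nothing):
free -m | grep -i swap
swapon --show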
6. Disable SELinux and firewalld (run on both master and slave nodes)
Change SELinux from enforcing to disabled. Note that the \1 before "disabled" in the command below is the digit 1, not the lowercase letter l:
sed -ri 's#(SELINUX=).*#\1disabled#' /etc/selinux/config
Temporarily turn off enforcement:
setenforce 0
Verify that enforcement is off; if it is, the command should return Permissive:
getenforce
Stop and disable the firewall:
systemctl stop firewalld && systemctl disable firewalld
Check that the firewall is stopped:
systemctl status firewalld
7. Tune kernel parameters (run on both master and slave nodes)
Method 1 (recommended: write the settings to a dedicated sysctl file):
cat > /etc/sysctl.d/k8s.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
vm.swappiness = 0
EOF
Method 2 (append the same settings to the default file /etc/sysctl.conf):
cat >> /etc/sysctl.conf <<EOF
net.bridge.bridge-nf-call-ip6tables = 0
net.bridge.bridge-nf-call-iptables = 1
net.ipv4.ip_forward = 1
vm.max_map_count = 262144
vm.swappiness = 0
EOF
Method 3 (set the values at runtime with sysctl -w; these do not survive a reboot):
sudo sysctl -w net.bridge.bridge-nf-call-ip6tables=0
sudo sysctl -w net.bridge.bridge-nf-call-iptables=1
sudo sysctl -w net.ipv4.ip_forward=1
sudo sysctl -w vm.max_map_count=262144
sudo sysctl -w vm.swappiness=0
Parameter notes:
- net.bridge.bridge-nf-call-ip6tables and net.bridge.bridge-nf-call-iptables: netfilter can filter at both L2 and L3. A value of 0 tells iptables not to process bridged traffic; a value of 1 means packets forwarded by a Layer 2 bridge also pass through the iptables FORWARD rules, so L3 iptables rules end up filtering L2 frames.
- vm.max_map_count limits the number of VMAs (virtual memory areas) a single process may own. If it is too low, you may see errors such as: "max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]".
- net.ipv4.ip_forward: for security reasons, Linux disables packet forwarding by default. Forwarding means that when a host has more than one NIC and receives a packet on one of them, it sends the packet out through another NIC according to the destination IP and the routing table, which is what a router does. To give Linux this routing capability, set net.ipv4.ip_forward = 1.
- vm.swappiness: 0 means use physical memory as much as possible and only then fall back to swap; 100 means swap aggressively and move data from memory into swap promptly.
Load the br_netfilter module and apply the settings from the dedicated file:
modprobe br_netfilter
sysctl -p /etc/sysctl.d/k8s.conf
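Optionally confirm that the module is loaded and the values have taken effect:
lsmod | grep br_netfilter
sysctl net.bridge.bridge-nf-call-iptables net.ipv4.ip_forward vm.max_map_count vm.swappiness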
8. Configure the package repositories (run on both master and slave nodes)
8.1 Back up the existing yum repo files
mkdir /etc/yum.repos.d/bak
mv /etc/yum.repos.d/CentOS-* /etc/yum.repos.d/bak/
8.2 Download the Aliyun repo configurations
wget -O /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
or
curl -o /etc/yum.repos.d/Centos-7.repo http://mirrors.aliyun.com/repo/Centos-7.repo
wget -O /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
or
curl -o /etc/yum.repos.d/docker-ce.repo http://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
8.3 Configure the Kubernetes repo
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF
8.4 Rebuild the yum cache
yum clean all   # clear all yum caches
yum makecache   # rebuild the yum cache
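A quick check (optional) that the new repos are visible; the repo ids depend on the repo files written above:
yum repolist enabled | grep -Ei 'kubernetes|docker-ce'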
9. Configure a Docker registry mirror (run on both master and slave nodes)
mkdir -p /etc/docker/
cat <<EOF > /etc/docker/daemon.json
{
"registry-mirrors": ["https://q3rmdln3.mirror.aliyuncs.com"],
"insecure-registries":["http://192.168.100.20:5000"]
}
EOF
10. Install Docker (run on both master and slave nodes)
List the available versions:
yum list docker-ce --showduplicates | sort -r
Install the latest version from the repo:
yum install docker-ce -y
Start Docker and enable it at boot:
systemctl enable docker && systemctl start docker
Check the version:
docker --version
or
docker -v
11. Install the Kubernetes components (run on both master and slave nodes)
Install kubeadm, kubelet, and kubectl on ALL nodes!
List the available versions:
yum list kubeadm kubelet kubectl --showduplicates
Because releases change frequently, you can pin specific versions, for example as below (if you pin, keep the version consistent with the kubernetesVersion set in kubeadm.yaml later; these notes end up using v1.23.4):
yum install -y kubelet-1.16.2 kubeadm-1.16.2 kubectl-1.16.2 --disableexcludes=kubernetes
Or install the latest versions without pinning:
yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
Parameter notes:
- --disableexcludes=kubernetes: ignore any exclude= directives configured for the kubernetes repo, so the packages can be installed from it
Other notes:
- kubelet depends on conntrack and socat.
Check the installed versions:
kubelet --version
kubeadm version
kubectl version
Enable kubelet at boot; otherwise pods will not come back up automatically after the host reboots:
systemctl enable kubelet
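Note that kubelet has no configuration to run yet, so it is normal for the service to restart in a loop until kubeadm init (or kubeadm join) has been executed. A quick way to check its state:
systemctl status kubelet
journalctl -u kubelet --no-pager | tail -n 20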
12. Initialization (master only)
Run this on the master only!
mkdir -p /data/k8s-install
cd /data/k8s-install
Create the config file (vim kubeadm.yaml) with the following content; the lines marked with # comments are the ones you need to adapt:
apiVersion: kubeadm.k8s.io/v1beta2
bootstrapTokens:
- groups:
  - system:bootstrappers:kubeadm:default-node-token
  token: abcdef.0123456789abcdef
  ttl: 24h0m0s
  usages:
  - signing
  - authentication
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: 192.168.100.20   # apiserver address; with a single master, use the master's internal IP
  bindPort: 6443
nodeRegistration:
  criSocket: /var/run/dockershim.sock
  name: k8s-master
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
---
apiServer:
  timeoutForControlPlane: 4m0s
apiVersion: kubeadm.k8s.io/v1beta2
certificatesDir: /etc/kubernetes/pki
clusterName: kubernetes
controllerManager: {}
dns:
  type: CoreDNS
etcd:
  local:
    dataDir: /var/lib/etcd
imageRepository: registry.aliyuncs.com/google_containers   # change to a domestic mirror such as Aliyun; registry.cn-hangzhou.aliyuncs.com/google_containers also works
kind: ClusterConfiguration
kubernetesVersion: v1.23.4
networking:
  dnsDomain: cluster.local
  podSubnet: "10.244.0.0/16"   # Pod network CIDR; the flannel plugin uses this range
  serviceSubnet: 10.96.0.0/12
scheduler: {}
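Optionally (a convenience check, not part of the original steps), you can have kubeadm print its default configuration and compare it against your file:
kubeadm config print init-defaults > kubeadm-defaults.yaml
diff kubeadm.yaml kubeadm-defaults.yaml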
List the required images (all components run as containers).
There should be 7 of them: kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, pause, etcd, coredns.
If the images cannot be found, try changing the imageRepository configured in kubeadm.yaml back to the Google registry: k8s.gcr.io
[root@k8s-master k8s-install]# kubeadm config images list --config kubeadm.yaml
registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.4
registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4
registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.4
registry.aliyuncs.com/google_containers/kube-proxy:v1.23.4
registry.aliyuncs.com/google_containers/pause:3.6
registry.aliyuncs.com/google_containers/etcd:3.5.1-0
registry.aliyuncs.com/google_containers/coredns:v1.8.6
Pull the images to the local machine:
[root@k8s-master k8s-install]# kubeadm config images pull --config kubeadm.yaml
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-apiserver:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-controller-manager:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-scheduler:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/kube-proxy:v1.23.4
[config/images] Pulled registry.aliyuncs.com/google_containers/pause:3.6
[config/images] Pulled registry.aliyuncs.com/google_containers/etcd:3.5.1-0
[config/images] Pulled registry.aliyuncs.com/google_containers/coredns:v1.8.6
Initialize the master node:
[root@k8s-master k8s-install]# kubeadm init --config kubeadm.yaml
Note!
You may run into the following error:
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
[kubelet-check] It seems like the kubelet isn't running or healthy.
[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp [::1]:10248: connect: connection refused.
Unfortunately, an error has occurred:
timed out waiting for the condition
This error is likely caused by:
- The kubelet is not running
- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
- 'systemctl status kubelet'
- 'journalctl -xeu kubelet'
Additionally, a control plane component may have crashed or exited when started by the container runtime.
To troubleshoot, list all containers using your preferred container runtimes CLI. Here is one example how you may list all Kubernetes containers running in docker:
- 'docker ps -a | grep kube | grep -v pause'
Once you have found the failing container, you can inspect its logs with:
- 'docker logs CONTAINERID'
error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
To see the stack trace of this error execute with --v=5 or higher
Fix (recommended: apply the configuration below first and then run the initialization; execute on ALL nodes!):
Add the line "exec-opts": ["native.cgroupdriver=systemd"] to /etc/docker/daemon.json, restart the docker service, clear the kubeadm state, and then re-run the initialization.
[root@k8s-master k8s-install]# cat /etc/docker/daemon.json
{
"exec-opts": ["native.cgroupdriver=systemd"],
"registry-mirrors": ["https://q3rmdln3.mirror.aliyuncs.com"],
"insecure-registries":["http://192.168.100.20:5000"]
}
Restart the docker service:
[root@k8s-master k8s-install]# systemctl restart docker
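Optionally confirm that Docker now reports the systemd cgroup driver before re-running the initialization:
docker info 2>/dev/null | grep -i "cgroup driver"   # should print: Cgroup Driver: systemd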
Reset the kubeadm state:
[root@k8s-master k8s-install]# kubeadm reset -f
Re-run the initialization:
[root@k8s-master k8s-install]# kubeadm init --config kubeadm.yaml
On success, the init output ends with something like:
……
Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

Alternatively, if you are the root user, you can run:

export KUBECONFIG=/etc/kubernetes/admin.conf

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 192.168.100.20:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:cebc76018abc850a5b948d34e35fe34b021e92ecc65299……
Following the instructions, run these commands on the master:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Check the nodes. The status should be NotReady at this point, because no network plugin has been configured yet:
[root@k8s-master k8s-install]# kubectl get node
NAME         STATUS     ROLES                  AGE   VERSION
k8s-master   NotReady   control-plane,master   16m   v1.23.4
kubectl describe node k8s-master also shows that the network is not ready:
[root@k8s-master k8s-install]# kubectl describe node k8s-master | grep network
Ready   False   Fri, 18 Feb 2022 16:06:40 +0800   Fri, 18 Feb 2022 15:35:44 +0800   KubeletNotReady   container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Side note:
When troubleshooting, the single most useful tool is kubectl describe, which shows the detailed information, status, and events of the node object.
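For example, a couple of quick checks (illustrative commands, not from the original notes):
kubectl describe node k8s-master | grep -A 10 Events
kubectl get events -n kube-system --sort-by=.metadata.creationTimestamp | tail -n 20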
13. Join the slaves to the cluster (run on every slave)
kubeadm join 192.168.100.20:6443 --token abcdef.0123456789abcdef \
--discovery-token-ca-cert-hash sha256:cebc76018abc850a5b948d34e35fe34b021e92ecc65299……
If the join succeeds, the slave prints output like this:
[preflight] Running pre-flight checks
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -o yaml'
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Starting the kubelet
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.

Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
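The token in the join command expires after the ttl set in kubeadm.yaml (24 hours here). If it has already expired when you add a node later, generate a fresh join command on the master:
kubeadm token create --print-join-command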
Check the nodes again:
[root@k8s-master k8s-install]# kubectl get node
NAME         STATUS     ROLES                  AGE    VERSION
k8s-master   NotReady   control-plane,master   24m    v1.23.4
k8s-slave1   NotReady   <none>                 97s    v1.23.4
k8s-slave2   NotReady   <none>                 102s   v1.23.4
Check the pods. The coredns pods are Pending, i.e. scheduling failed, which is expected because the master's network is not ready yet:
[root@k8s-master k8s-install]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-jrbck              0/1     Pending   0          33m
coredns-6d8c4cb4d-wx5j6              0/1     Pending   0          33m
etcd-k8s-master                      1/1     Running   0          34m
kube-apiserver-k8s-master            1/1     Running   0          34m
kube-controller-manager-k8s-master   1/1     Running   0          34m
kube-proxy-57h47                     1/1     Running   0          33m
kube-proxy-q55pv                     1/1     Running   0          11m
kube-proxy-qs57s                     1/1     Running   0          11m
kube-scheduler-k8s-master            1/1     Running   0          34m
14. Install flannel (master only)
Because a few configuration changes are still needed, it is not recommended to apply the manifest directly with:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Instead, download the yml file first, modify it, and then apply it:
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Edit kube-flannel.yml and, around line 200 of that file, add an argument that pins the network interface: - --iface=XXX
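If you are not sure which interface name to use, you can check which NIC carries the 192.168.100.x address on your nodes, for example:
ip -o -4 addr show | grep 192.168.100
ip route get 192.168.100.21   # the "dev" field shows the interface used to reach the other nodes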
---
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: psp.flannel.unprivileged
  annotations:
    seccomp.security.alpha.kubernetes.io/allowedProfileNames: docker/default
    seccomp.security.alpha.kubernetes.io/defaultProfileName: docker/default
    apparmor.security.beta.kubernetes.io/allowedProfileNames: runtime/default
    apparmor.security.beta.kubernetes.io/defaultProfileName: runtime/default
spec:
  privileged: false
  volumes:
  - configMap
  - secret
  - emptyDir
  - hostPath
  allowedHostPaths:
  - pathPrefix: "/etc/cni/net.d"
  - pathPrefix: "/etc/kube-flannel"
  - pathPrefix: "/run/flannel"
  readOnlyRootFilesystem: false
  # Users and groups
  runAsUser:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  # Privilege Escalation
  allowPrivilegeEscalation: false
  defaultAllowPrivilegeEscalation: false
  # Capabilities
  allowedCapabilities: ['NET_ADMIN', 'NET_RAW']
  defaultAddCapabilities: []
  requiredDropCapabilities: []
  # Host namespaces
  hostPID: false
  hostIPC: false
  hostNetwork: true
  hostPorts:
  - min: 0
    max: 65535
  # SELinux
  seLinux:
    # SELinux is unused in CaaSP
    rule: 'RunAsAny'
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
rules:
- apiGroups: ['extensions']
  resources: ['podsecuritypolicies']
  verbs: ['use']
  resourceNames: ['psp.flannel.unprivileged']
- apiGroups:
  - ""
  resources:
  - pods
  verbs:
  - get
- apiGroups:
  - ""
  resources:
  - nodes
  verbs:
  - list
  - watch
- apiGroups:
  - ""
  resources:
  - nodes/status
  verbs:
  - patch
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: flannel
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: flannel
subjects:
- kind: ServiceAccount
  name: flannel
  namespace: kube-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: flannel
  namespace: kube-system
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: kube-flannel-cfg
  namespace: kube-system
  labels:
    tier: node
    app: flannel
data:
  cni-conf.json: |
    {
      "name": "cbr0",
      "cniVersion": "0.3.1",
      "plugins": [
        {
          "type": "flannel",
          "delegate": {
            "hairpinMode": true,
            "isDefaultGateway": true
          }
        },
        {
          "type": "portmap",
          "capabilities": {
            "portMappings": true
          }
        }
      ]
    }
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "vxlan"
      }
    }
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: kube-flannel-ds
  namespace: kube-system
  labels:
    tier: node
    app: flannel
spec:
  selector:
    matchLabels:
      app: flannel
  template:
    metadata:
      labels:
        tier: node
        app: flannel
    spec:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
            - matchExpressions:
              - key: kubernetes.io/os
                operator: In
                values:
                - linux
      hostNetwork: true
      priorityClassName: system-node-critical
      tolerations:
      - operator: Exists
        effect: NoSchedule
      serviceAccountName: flannel
      initContainers:
      - name: install-cni-plugin
        #image: flannelcni/flannel-cni-plugin:v1.0.1 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel-cni-plugin:v1.0.1
        command:
        - cp
        args:
        - -f
        - /flannel
        - /opt/cni/bin/flannel
        volumeMounts:
        - name: cni-plugin
          mountPath: /opt/cni/bin
      - name: install-cni
        #image: flannelcni/flannel:v0.16.3 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.16.3
        command:
        - cp
        args:
        - -f
        - /etc/kube-flannel/cni-conf.json
        - /etc/cni/net.d/10-flannel.conflist
        volumeMounts:
        - name: cni
          mountPath: /etc/cni/net.d
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
      containers:
      - name: kube-flannel
        #image: flannelcni/flannel:v0.16.3 for ppc64le and mips64le (dockerhub limitations may apply)
        image: rancher/mirrored-flannelcni-flannel:v0.16.3
        command:
        - /opt/bin/flanneld
        args:
        - --ip-masq
        - --kube-subnet-mgr
        - --iface=ens33   # with multiple NICs, it is best to specify the internal NIC name; on CentOS this is usually eth0 or ens33. If not specified, the first NIC is used by default.
        resources:
          requests:
            cpu: "100m"
            memory: "50Mi"
          limits:
            cpu: "100m"
            memory: "50Mi"
        securityContext:
          privileged: false
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        volumeMounts:
        - name: run
          mountPath: /run/flannel
        - name: flannel-cfg
          mountPath: /etc/kube-flannel/
        - name: xtables-lock
          mountPath: /run/xtables.lock
      volumes:
      - name: run
        hostPath:
          path: /run/flannel
      - name: cni-plugin
        hostPath:
          path: /opt/cni/bin
      - name: cni
        hostPath:
          path: /etc/cni/net.d
      - name: flannel-cfg
        configMap:
          name: kube-flannel-cfg
      - name: xtables-lock
        hostPath:
          path: /run/xtables.lock
          type: FileOrCreate
Run the command to install flannel.
Since this is a fresh install, either kubectl create or kubectl apply works (note the difference: create fails if the resources already exist, while apply also updates existing ones).
[root@k8s-master k8s-install]# kubectl create -f kube-flannel.yml
Warning: policy/v1beta1 PodSecurityPolicy is deprecated in v1.21+, unavailable in v1.25+
podsecuritypolicy.policy/psp.flannel.unprivileged created
clusterrole.rbac.authorization.k8s.io/flannel created
clusterrolebinding.rbac.authorization.k8s.io/flannel created
serviceaccount/flannel created
configmap/kube-flannel-cfg created
daemonset.apps/kube-flannel-ds created
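Optionally wait for the flannel DaemonSet to finish rolling out and check its pods (an extra verification step, not in the original notes):
kubectl -n kube-system rollout status daemonset/kube-flannel-ds
kubectl -n kube-system get pods -l app=flannel -o wide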
Check the pods again; the coredns pods are now Running:
[root@k8s-master k8s-install]# kubectl get pods -n kube-system
NAME                                 READY   STATUS    RESTARTS   AGE
coredns-6d8c4cb4d-jrbck              1/1     Running   0          83m
coredns-6d8c4cb4d-wx5j6              1/1     Running   0          83m
etcd-k8s-master                      1/1     Running   0          83m
kube-apiserver-k8s-master            1/1     Running   0          83m
kube-controller-manager-k8s-master   1/1     Running   0          83m
kube-flannel-ds-6zvv2                1/1     Running   0          4m53s
kube-flannel-ds-lmthg                1/1     Running   0          4m53s
kube-flannel-ds-p96n9                1/1     Running   0          4m53s
kube-proxy-57h47                     1/1     Running   0          83m
kube-proxy-q55pv                     1/1     Running   0          60m
kube-proxy-qs57s                     1/1     Running   0          60m
kube-scheduler-k8s-master            1/1     Running   0          83m
Check the nodes again; their status has changed to Ready:
[root@k8s-master k8s-install]# kubectl get node
NAME         STATUS   ROLES                  AGE   VERSION
k8s-master   Ready    control-plane,master   84m   v1.23.4
k8s-slave1   Ready    <none>                 61m   v1.23.4
k8s-slave2   Ready    <none>                 61m   v1.23.4
The k8s cluster deployment is complete!
15. Setting node ROLES and LABELS
Check the roles:
[root@k8s-master k8s-install]# kubectl get node -o wide
NAME         STATUS   ROLES                  AGE    VERSION   INTERNAL-IP      EXTERNAL-IP   OS-IMAGE                KERNEL-VERSION                CONTAINER-RUNTIME
k8s-master   Ready    control-plane,master   119m   v1.23.4   192.168.100.20   <none>        CentOS Linux 7 (Core)   3.10.0-1160.53.1.el7.x86_64   docker://20.10.12
k8s-slave1   Ready    <none>                 96m    v1.23.4   192.168.100.21   <none>        CentOS Linux 7 (Core)   3.10.0-1160.53.1.el7.x86_64   docker://20.10.12
k8s-slave2   Ready    <none>                 97m    v1.23.4   192.168.100.22   <none>        CentOS Linux 7 (Core)   3.10.0-1160.53.1.el7.x86_64   docker://20.10.12
Check the labels:
[root@k8s-master k8s-install]# kubectl get node --show-labels
NAME         STATUS   ROLES                  AGE    VERSION   LABELS
k8s-master   Ready    control-plane,master   120m   v1.23.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-master,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
k8s-slave1   Ready    <none>                 97m    v1.23.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-slave1,kubernetes.io/os=linux
k8s-slave2   Ready    <none>                 97m    v1.23.4   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=k8s-slave2,kubernetes.io/os=linux
A k8s slave node's ROLES defaults to <none>. You can set it with a kubernetes.io/role label:
kubectl label no k8s-slave1 kubernetes.io/role=test-node
Parameter notes:
- the value after kubernetes.io/role= is what appears in the ROLES column
You can also set an ordinary label:
kubectl label no k8s-slave1 roles=dev-pc
Note:
- After setting a role this way, the value shows up in both the ROLES and the LABELS columns (e.g. test-node in the example above)
- A plain label only shows up in the LABELS column (e.g. dev-pc in the example above)
- Reference: https://blog.csdn.net/Lingoes...
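If you need to undo either of the settings above, a label (including the role label) can be removed by appending a minus sign to its key, for example:
kubectl label node k8s-slave1 kubernetes.io/role-
kubectl label node k8s-slave1 roles-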
Please note! After the deployment, the master node does not schedule workload pods by default, i.e. workload pods cannot run on the master!
Details:
- In a cluster deployed with kubeadm, the master node refuses to have pods scheduled onto it by default; in official terms, the master carries one or more "taints", and the purpose of a taint is to make the node refuse to schedule pods onto itself. If, for example, a test environment is short on resources and you want the master to also act as a schedulable worker, there are two options: (1) remove the taint (not recommended in production); (2) make the pods tolerate (tolerations) the taint on that node.
[root@k8s-master k8s-install]# kubectl describe nodes k8s-master | grep Taints
Taints:             node-role.kubernetes.io/master:NoSchedule
If you want workload pods to be able to run on the master, you can remove the taint with the following command (not recommended!):
kubectl taint node k8s-master node-role.kubernetes.io/master:NoSchedule-
After running the command above, kubectl describe should now show:
[root@k8s-master k8s-install]# kubectl describe nodes k8s-master | grep Taints
Taints:             <none>
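If you later want to restore the default behaviour, the taint can be added back (key=value:effect syntax with an empty value):
kubectl taint node k8s-master node-role.kubernetes.io/master=:NoSchedule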