kubernetes/k8s | Multi-node k8s deployment: deploying etcd storage and the flannel network configuration


Contents

    • Multi-node k8s deployment: deploying etcd storage
      • I. Requirements analysis:
      • II. Deployment steps (master node):
        • 【1】Download the certificate generation tools
        • 【2】Define the CA certificate
        • 【3】Create the CA certificate signing request
        • 【4】Generate the CA certificate
        • 【5】Specify communication verification among the three etcd nodes
        • 【6】Generate the etcd server certificate
        • 【7】Upload the etcd binary package
        • 【8】Create the configuration files, command files, and certificates
        • 【9】Use another terminal to copy the certificates and the systemd service unit to the other nodes
        • 【10】Modify the configuration file under cfg on the other two nodes
        • 【11】Check whether the cluster is healthy
    • Multi-node k8s deployment: deploying the flannel network configuration
      • I. Requirements analysis:
      • II. Deployment steps:
        • 【1】Write the allocated subnet range into etcd for flannel to use
        • 【2】Check the information that was written
        • 【3】Deploy the flannel component on all nodes
        • 【4】Create the k8s working directory and copy the command files
        • 【5】Write the flannel startup script (the same on every node)
        • 【6】Enable the flannel network function
        • 【7】Configure docker to connect to flannel (the same on all nodes)
        • 【8】Check the subnet assigned by bip at startup
        • 【9】Restart the docker service
        • 【10】Check the flannel network
        • 【11】Test connectivity between the nodes

Multi-node k8s deployment: deploying etcd storage
I. Requirements analysis:
【1】192.168.60.10 is the master node: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
【2】192.168.60.60 is node1: kubelet, kube-proxy, docker, flannel, etcd
【3】192.168.60.100 is node2: kubelet, kube-proxy, docker, flannel, etcd
II. Deployment steps (master node):
// master node configuration
【1】Download the certificate generation tools
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# su
[root@master ~]# cd /usr/local/bin
[root@master bin]# chmod +x *
[root@master bin]# ls
cfssl  cfssl-certinfo  cfssljson
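
The download commands themselves are not shown above. A minimal sketch, assuming the three cfssl binaries are fetched from the official cfssl release URLs (version and source are an assumption; adjust to your environment):

# assumption: binaries come from the public cfssl R1.2 release; the original article does not show this step
curl -L https://pkg.cfssl.org/R1.2/cfssl_linux-amd64 -o /usr/local/bin/cfssl
curl -L https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64 -o /usr/local/bin/cfssljson
curl -L https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64 -o /usr/local/bin/cfssl-certinfo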

【2】Define the CA certificate
[root@master ~]# mkdir -p k8s/etcd-cert
[root@master ~]# cd k8s/etcd-cert
[root@master etcd-cert]# cat > ca-config.json <
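
The body of the heredoc did not survive above. A sketch of the usual cfssl CA config for this kind of deployment, assuming the default 87600h expiry and a "www" signing profile, which matches the -profile=www flag used in step 【6】:

# expiry and profile layout are assumptions; the profile name "www" is taken from step 【6】
[root@master etcd-cert]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF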

【3】Create the CA certificate signing request
[root@master etcd-cert]# cat > ca-csr.json <
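
Here too the JSON body was cut off. A sketch of a typical CA signing request for this setup; "etcd CA" is the conventional CN, and the C/ST/L values are placeholders:

# C/ST/L below are placeholders, not values from the original article; adjust to your organization
[root@master etcd-cert]# cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing"
    }
  ]
}
EOF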

【4】Generate the CA certificate
[root@master etcd-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

【5】Specify communication verification among the three etcd nodes
[root@master etcd-cert]# cat > server-csr.json <
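
The heredoc body is again missing. The hosts list must contain the three etcd node IPs from the requirements analysis; the rest mirrors the CA request (C/ST/L remain placeholders):

# hosts are the three node IPs from this article; C/ST/L are placeholders
[root@master etcd-cert]# cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "192.168.60.10",
    "192.168.60.60",
    "192.168.60.100"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "ST": "Beijing",
      "L": "Beijing"
    }
  ]
}
EOF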

【6】Generate the etcd server certificate
[root@master etcd-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
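
If the signing succeeded, the directory should now hold both key pairs (cfssljson -bare writes the .pem and -key.pem files), roughly:

# quick sanity check of the generated certificates
[root@master etcd-cert]# ls *.pem
ca-key.pem  ca.pem  server-key.pem  server.pem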

【7】Upload the etcd binary package
[root@master k8s]# ls
etcd-cert  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz
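
The etcd-v3.3.10-linux-amd64 directory in the listing assumes the uploaded tarball has already been unpacked; if it has not, extract it first:

# only needed if the archive has not been unpacked yet
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz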

【8】Create the configuration files, command files, and certificates
[root@master k8s]# mkdir -p /opt/etcd/{cfg,bin,ssl}
// command files
[root@master k8s]# cp etcd-v3.3.10-linux-amd64/etcd etcd-v3.3.10-linux-amd64/etcdctl /opt/etcd/bin/
// certificates
[root@master k8s]# cp etcd-cert/*.pem /opt/etcd/ssl/
// upload etcd.sh, the script that generates the etcd configuration file and the systemd service unit (see the sketch below)
[root@master k8s]# ls
etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# sh etcd.sh etcd01 192.168.60.10 etcd02=https://192.168.60.60:2380,etcd03=https://192.168.60.100:2380
// check whether the etcd process has started
[root@master ~]# ps -ef | grep etcd
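
The etcd.sh script itself is not reproduced in the original. A sketch of what such a generator script typically contains, assuming it takes the node name, node IP, and peer list as positional arguments (matching the invocation above) and writes /opt/etcd/cfg/etcd plus the etcd.service unit, using the certificate paths set up earlier:

#!/bin/bash
# sketch, not the original script; argument order is assumed from the invocation above
# usage: ./etcd.sh etcd01 192.168.60.10 etcd02=https://192.168.60.60:2380,etcd03=https://192.168.60.100:2380

ETCD_NAME=$1
ETCD_IP=$2
ETCD_CLUSTER=$3
WORK_DIR=/opt/etcd

# configuration file read by the systemd unit
cat <<EOF >$WORK_DIR/cfg/etcd
#[Member]
ETCD_NAME="${ETCD_NAME}"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_LISTEN_CLIENT_URLS="https://${ETCD_IP}:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://${ETCD_IP}:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://${ETCD_IP}:2379"
ETCD_INITIAL_CLUSTER="${ETCD_NAME}=https://${ETCD_IP}:2380,${ETCD_CLUSTER}"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
EOF

# systemd unit that starts etcd with the certificates generated earlier
cat <<EOF >/usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target

[Service]
Type=notify
EnvironmentFile=${WORK_DIR}/cfg/etcd
ExecStart=${WORK_DIR}/bin/etcd \
--name=\${ETCD_NAME} \
--data-dir=\${ETCD_DATA_DIR} \
--listen-peer-urls=\${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=\${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=\${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=\${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=\${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=\${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=${WORK_DIR}/ssl/server.pem \
--key-file=${WORK_DIR}/ssl/server-key.pem \
--peer-cert-file=${WORK_DIR}/ssl/server.pem \
--peer-key-file=${WORK_DIR}/ssl/server-key.pem \
--trusted-ca-file=${WORK_DIR}/ssl/ca.pem \
--peer-trusted-ca-file=${WORK_DIR}/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable etcd
systemctl restart etcd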

【9】Use another terminal to copy the certificates and the systemd service unit to the other nodes
[root@master ~]# scp -r /opt/etcd/ root@192.168.60.60:/opt/
[root@master ~]# scp -r /opt/etcd/ root@192.168.60.100:/opt/
// copy the startup unit file to the other nodes
scp /usr/lib/systemd/system/etcd.service root@192.168.60.60:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service root@192.168.60.100:/usr/lib/systemd/system/

【10】Modify the configuration file under cfg on the other two nodes
// On node 192.168.60.60, mainly change the name and the IP addresses
[root@node1 ~]# cd /opt/etcd/cfg/
[root@node1 cfg]# ls
etcd
[root@node1 cfg]# vim etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.60.60:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.60.60:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.60.60:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.60.60:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.60.10:2380,etcd02=https://192.168.60.60:2380,etcd03=https://192.168.60.100:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node1 cfg]# systemctl start etcd.service
[root@node1 cfg]# systemctl status etcd.service

// On node 192.168.60.100, mainly change the name and the IP addresses
[root@node2 ~]# cd /opt/etcd/cfg/
[root@node2 cfg]# ls
etcd
[root@node2 cfg]# vim etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.60.100:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.60.100:2379"

#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.60.100:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.60.100:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.60.10:2380,etcd02=https://192.168.60.60:2380,etcd03=https://192.168.60.100:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"
[root@node2 cfg]# systemctl start etcd.service
[root@node2 cfg]# systemctl status etcd.service

【11】Check whether the cluster is healthy
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoint="https://192.168.60.10:2379,https://192.168.60.60:2379,https://192.168.60.100:2379" cluster-health
member 59173e3f8aecc6c3 is healthy: got healthy result from https://192.168.60.100:2379
member 8da25ad72397ec6e is healthy: got healthy result from https://192.168.60.10:2379
member a21e580b9191cb20 is healthy: got healthy result from https://192.168.60.60:2379
cluster is healthy
[root@master etcd-cert]#

————————————————————————————————————————
Multi-node k8s deployment: deploying the flannel network configuration
I. Requirements analysis:
【1】192.168.60.10 is the master node: kube-apiserver, kube-controller-manager, kube-scheduler, etcd
【2】192.168.60.60 is node1: kubelet, kube-proxy, docker, flannel, etcd
【3】192.168.60.100 is node2: kubelet, kube-proxy, docker, flannel, etcd
II. Deployment steps:
【1】Write the allocated subnet range into etcd for flannel to use
[root@master etcd-cert]# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoint="https://192.168.60.10:2379,https://192.168.60.60:2379,https://192.168.60.100:2379" \
set /coreos.com/network/config '{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}'

【2】Check the information that was written
[root@master etcd-cert]# /opt/etcd/bin/etcdctl \
--ca-file=ca.pem \
--cert-file=server.pem \
--key-file=server-key.pem \
--endpoint="https://192.168.60.10:2379,https://192.168.60.60:2379,https://192.168.60.100:2379" \
get /coreos.com/network/config
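
If the write in step 【1】 succeeded, the get should simply echo the stored value back:

{"Network":"172.17.0.0/16","Backend":{"Type":"vxlan"}}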

【3】Deploy the flannel component on all nodes
// On node 192.168.60.60
[root@node1 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
// On node 192.168.60.100
[root@node2 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md

【4】Create the k8s working directory and copy the command files
// On node 192.168.60.60
[root@node1 ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
[root@node1 ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/
// On node 192.168.60.100
[root@node2 ~]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
[root@node2 ~]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin/

【5】Write the flannel startup script (the same on every node)
[root@node1 ~]# vim flannel.sh
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld

【6】Enable the flannel network function
[root@node1 ~]# sh flannel.sh https://192.168.60.10:2379,https://192.168.60.60:2379,https://192.168.60.100:2379
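
To confirm the overlay actually came up, it is worth checking the service and the vxlan interface on each node; the unit name comes from the flanneld.service file written by flannel.sh, and flannel.1 is the interface flannel creates for the vxlan backend:

# quick verification; repeat on every node
[root@node1 ~]# systemctl status flanneld
[root@node1 ~]# ifconfig flannel.1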

【7】Configure docker to connect to flannel (the same on all nodes)
[root@node1 ~]# vim /usr/lib/systemd/system/docker.service
# in the [Service] section:
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock

【8】Check the subnet assigned by bip at startup
// On node1 (192.168.60.60)
[root@node1 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.39.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.39.1/24 --ip-masq=false --mtu=1472"
// On node2 (192.168.60.100)
[root@node2 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.85.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1472"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.85.1/24 --ip-masq=false --mtu=1472"

【9】Restart the docker service
[root@node1 ~]# systemctl daemon-reload
[root@node1 ~]# systemctl restart docker.service

【10】Check the flannel network
// On node1 (192.168.60.60)
[root@node1 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.39.1  netmask 255.255.255.0  broadcast 172.17.39.255
        ether 02:42:b1:19:5b:a1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
// On node2 (192.168.60.100)
[root@node2 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.85.1  netmask 255.255.255.0  broadcast 172.17.85.255
        ether 02:42:b5:54:91:f1  txqueuelen 0  (Ethernet)
        RX packets 0  bytes 0 (0.0 B)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 0  bytes 0 (0.0 B)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

【11】Test connectivity between the nodes
// On node 192.168.60.60
[root@node1 ~]# docker run -it centos:7 /bin/bash
[root@2bbac9ebdc96 /]# yum install -y net-tools
[root@2bbac9ebdc96 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 172.17.39.2  netmask 255.255.255.0  broadcast 172.17.39.255
        ether 02:42:ac:11:27:02  txqueuelen 0  (Ethernet)
        RX packets 15198  bytes 12444271 (11.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 7322  bytes 398889 (389.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@2bbac9ebdc96 /]# ping 172.17.85.2
PING 172.17.85.2 (172.17.85.2) 56(84) bytes of data.
64 bytes from 172.17.85.2: icmp_seq=1 ttl=60 time=1.08 ms
64 bytes from 172.17.85.2: icmp_seq=2 ttl=60 time=0.523 ms
64 bytes from 172.17.85.2: icmp_seq=3 ttl=60 time=0.619 ms
64 bytes from 172.17.85.2: icmp_seq=4 ttl=60 time=2.24 ms

// On node 192.168.60.100
[root@node2 ~]# docker run -it centos:7 /bin/bash
[root@79995e04b320 /]# yum install -y net-tools
[root@79995e04b320 /]# ifconfig
eth0: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1472
        inet 172.17.85.2  netmask 255.255.255.0  broadcast 172.17.85.255
        ether 02:42:ac:11:55:02  txqueuelen 0  (Ethernet)
        RX packets 15299  bytes 12447552 (11.8 MiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 5864  bytes 320081 (312.5 KiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0
[root@79995e04b320 /]# ping 172.17.39.2
PING 172.17.39.2 (172.17.39.2) 56(84) bytes of data.
64 bytes from 172.17.39.2: icmp_seq=1 ttl=60 time=0.706 ms
64 bytes from 172.17.39.2: icmp_seq=2 ttl=60 time=0.491 ms
64 bytes from 172.17.39.2: icmp_seq=3 ttl=60 time=0.486 ms
64 bytes from 172.17.39.2: icmp_seq=4 ttl=60 time=0.528 ms
