Contents
- I. Kubernetes
  - 1. Kubernetes overview
  - 2. Kubernetes features
  - 3. Kubernetes core concepts
- II. Kubernetes cluster architecture and components
  - 1. Master components
  - 2. Node components
- III. Flannel
  - 1. Flannel overview
  - 2. Flannel core concepts
  - 3. VXLAN mode
- IV. Kubernetes deployment example
  - 1. Lab environment
  - 2. Configuration steps
    - etcd deployment
    - master deployment
    - Node deployment
    - master02 deployment
    - Load balancer deployment
I. Kubernetes

1. Kubernetes overview

Kubernetes, abbreviated k8s, is a container cluster management system open-sourced by Google in 2014. It is used to deploy, scale, and manage containerized applications, and provides container orchestration, resource scheduling, elastic scaling, deployment management, service discovery, and related functions, making it much simpler and more efficient to run containerized applications.
- Container orchestration: similar to Compose; images and containers can be created in batches.
- Resource scheduling: resources can be allocated automatically by the system or pinned manually.
- Elastic scaling: as noted when comparing containers with virtual machines, container start-up is on the millisecond scale, so large numbers of containers can be spun up in a short time to absorb a spike in demand.
- Deployment management: the desired state of resources can be declared; of the available states, stateless and stateful deployments are the most important.
- Service discovery: etcd serves as the cluster's datastore and provides service discovery; it records container operations and a large amount of cluster state, all accessed through the apiserver. In production it should run with at least three members.
2. Kubernetes features

- Self-healing
  When a node fails, k8s restarts failed containers, replacing and redeploying them to preserve the expected replica count; it kills containers that fail health checks, and it does not send client requests to a pod until the pod is ready, so the online service is never interrupted.
- Elastic scaling
  Scale application instances up and down quickly, via commands, the UI, or automatically based on CPU and other resource usage, to absorb peak traffic and reclaim resources during quiet periods, saving cost.
- Automated rollout and rollback
  k8s updates applications with a rolling-update mechanism, replacing one pod at a time (a pod usually maps to one container) rather than deleting all pods at once; if a problem appears during the update, the change is rolled back so the upgrade does not take the service down.
- Service discovery and load balancing
  k8s gives a set of containers a single entry point (an internal IP address and a DNS name) and load-balances across all of the associated containers, so users never have to care about container IPs.
- Secret and configuration management
  Manage sensitive data and application configuration without exposing secrets inside images, improving security; commonly used configuration can also be stored in k8s for applications to consume.
- Storage orchestration
  Mount external storage systems, whether local storage, public cloud (such as AWS), or network storage (NFS, GlusterFS, Ceph), as part of the cluster's resources, which greatly improves storage flexibility.
- Batch processing
  One-off tasks and scheduled tasks, covering batch data processing and analytics scenarios; a sketch of both follows this list.
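A minimal sketch of both task types, assuming the batch/v1beta1 CronJob API of the k8s v1.12 release used later in this article; the names pi and hello are made up for illustration:

# One-off Job: runs a container to completion and stops
kubectl create -f - <<EOF
apiVersion: batch/v1
kind: Job
metadata:
  name: pi                    # hypothetical name
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(100)"]
      restartPolicy: Never
EOF

# CronJob: runs on a schedule, here once a minute
kubectl create -f - <<EOF
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: hello                 # hypothetical name
spec:
  schedule: "*/1 * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          containers:
          - name: hello
            image: busybox
            command: ["date"]
          restartPolicy: OnFailure
EOF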
Notes on update strategies:
Rolling update: new containers replace old ones one at a time; since replacement is gradual, requests may land on either old or new containers while it runs.
Blue-green deployment: two full environments take turns serving traffic without a pause, like the two hemispheres of a dolphin's brain, one resting while the other works.
Gray (canary) release: the rolling update described above, applied across an availability zone; new containers replace old ones from start to finish, pass after pass, without stopping.
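In kubectl terms, a rolling update and its rollback look roughly like this (a sketch; the deployment name nginx and the image tag are assumptions):

# Trigger a rolling update: pods are replaced one batch at a time
kubectl set image deployment/nginx nginx=nginx:1.19
# Watch old pods drain while new ones come up
kubectl rollout status deployment/nginx
# If something goes wrong, return to the previous revision
kubectl rollout undo deployment/nginx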
3. Kubernetes core concepts

- Pod
  The smallest deployable unit in k8s; a collection of one or more containers. Containers in a pod share a network namespace. Pods are ephemeral.
- Controllers
  Higher-level objects that deploy and manage Pods:
  ReplicaSet: maintains the expected number of Pod replicas
  Deployment: stateless application deployment
  StatefulSet: stateful application deployment
  DaemonSet: ensures a copy of a Pod runs on every Node
  Job: one-off task
  CronJob: scheduled task
- Service
  Keeps Pods reachable. A Service defines an access policy for a group of Pods; without one, a deployed workload cannot be reached by users, much as a container needs a mapped port to be accessible.
- Label: a tag attached to a resource, used to associate, query, and filter objects
- Namespaces: isolate objects from one another logically
- Annotations: arbitrary notes attached to objects
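To tie these concepts together, here is a minimal sketch of a Deployment whose Pods carry a Label, plus a Service that selects them by that Label (all names and ports are made up for illustration):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                   # hypothetical name
spec:
  replicas: 3                 # the underlying ReplicaSet keeps 3 Pods alive
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web              # Label used to associate and select objects
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web                   # stable entry point for the Pods
spec:
  selector:
    app: web                  # selects the Pods by their Label
  ports:
  - port: 80
    targetPort: 80
EOF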
II. Kubernetes cluster architecture and components

1. Master components

- kube-apiserver
  The Kubernetes API server is the cluster's unified entry point and the coordinator of all the other components. It exposes a RESTful API; every create, update, delete, and watch operation on object resources goes through the APIServer, which then persists the result to etcd.
- kube-controller-manager
  Handles routine background tasks in the cluster. Each resource type has its own controller, and the ControllerManager is responsible for running all of these controllers.
- kube-scheduler
  Selects a Node for each newly created Pod according to the scheduling algorithm; Pods can be placed on the same node or spread across different nodes.
  Note that not every Pod passes through the scheduler: when a workload is pinned to a specific node, the scheduler can be bypassed.
- etcd
  A distributed key-value store that holds cluster state, such as Pod and Service object data.
2. Node components

- kubelet
  The kubelet is the Master's agent on each Node. It manages the lifecycle of containers on its machine: creating containers, mounting Pod volumes, downloading secrets, reporting container and node status, and so on. The kubelet turns each Pod into a set of containers.
- kube-proxy
  Implements the Pod network proxy on each Node, maintaining network rules and layer-4 load balancing; it is the entry point for clients.
- docker or rocket
  The container engine that actually runs the containers.
III. Flannel

1. Flannel overview

Flannel creates a flannel.1 (or similarly named) network device on each host and installs a set of routing-table rules for it.
2. Flannel core concepts
- Overlay Network
  An overlay network is a virtual network layered on top of the underlying physical network; the hosts in it are connected by virtual links.
- VXLAN
  Virtual Extensible LAN, a tunneling technique that wraps layer-2 frames inside UDP packets so they can cross the underlay network.
- Flannel
  A kind of overlay network: it encapsulates the source packet inside another network packet for routing and delivery. It currently supports UDP, VXLAN, AWS VPC, GCE, and other forwarding backends.
- flannel0 device
  Passes IP packets between the operating-system kernel and a user-space application.
  Kernel space to user space: when the operating system hands an IP packet to the flannel0 device, flannel0 delivers it to the application that created the device, the flanneld process.
  User space to kernel space: when the flanneld process writes an IP packet to flannel0, the packet appears in the host's network stack and is then handled according to the host's routing table.
- Flannel subnets
  flannel assigns each host its own subnet, and every container on that host gets an IP address from that subnet. The subnet-to-host mapping is stored in etcd. docker0 must be tied to flannel so that the docker0 bridge's address range becomes the flannel subnet.
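Once the cluster built later in this article is running, that mapping can be inspected directly in etcd; a sketch, assuming the certificate paths used in the deployment below:

# List the per-host subnet leases flannel has recorded
/opt/etcd/bin/etcdctl --ca-file=/opt/etcd/ssl/ca.pem \
--cert-file=/opt/etcd/ssl/server.pem --key-file=/opt/etcd/ssl/server-key.pem \
--endpoints="https://20.0.0.20:2379" ls /coreos.com/network/subnets
# A key such as /coreos.com/network/subnets/172.17.3.0-24 holds the public IP
# and VTEP MAC of the host that owns that subnet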
IV. Kubernetes deployment example

1. Lab environment

Load balancers:
- Nginx01: 20.0.0.50/24
- Nginx02: 20.0.0.60/24
Cluster nodes:
- master01: 20.0.0.20/24
- master02: 20.0.0.10/24
- node01: 20.0.0.30/24
- node02: 20.0.0.40/24

2. Configuration steps

etcd deployment

etcd certificates

This part generates many certificates and related files, and several commands reference the certificate files by relative path, so pay attention to the working directory you run them from.
[root@localhost ~]# hostnamectl set-hostname master
[root@localhost ~]# su
[root@master ~]# mkdir k8s
[root@master ~]# cd k8s/
[root@master k8s]# rz -E
rz waiting to receive.
// These two files were prepared in advance: etcd-cert.sh holds the CA-certificate creation code that is reused below,
// and etcd.sh generates the etcd configuration file and startup script and will be executed later
[root@master k8s]# ls
etcd-cert.sh  etcd.sh
[root@master k8s]# mkdir etcd-cert
[root@master k8s]# mv etcd-cert.sh etcd-cert
[root@master k8s]# cd /usr/local/bin
[root@master bin]# rz -E
rz waiting to receive.
// These are the certificate tools; download them from:
// https://pkg.cfssl.org/R1.2/cfssl_linux-amd64, https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64, https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
// cfssl generates certificates, cfssljson generates certificates from JSON input, and cfssl-certinfo inspects certificate details
[root@master bin]# ls
cfssl  cfssl-certinfo  cfssljson
[root@master bin]# chmod +x *
[root@master bin]# cd
[root@master ~]# cd k8s/
[root@master k8s]# ls
etcd-cert  etcd.sh
// Define the CA configuration (the JSON below comes from etcd-cert.sh)
[root@master k8s]# cat > ca-config.json <<EOF
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}
EOF
// CA certificate signing request (also from etcd-cert.sh)
[root@master k8s]# cat > ca-csr.json <<EOF
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}
EOF
// Generate the CA key and certificate
[root@master k8s]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/28 16:17:31 [INFO] generating a new CA key and certificate from CSR
2020/09/28 16:17:31 [INFO] generate received request
2020/09/28 16:17:31 [INFO] received CSR
2020/09/28 16:17:31 [INFO] generating key: rsa-2048
2020/09/28 16:17:31 [INFO] encoded CSR
2020/09/28 16:17:31 [INFO] signed certificate with serial number 225437059867776436062700610309289006313657657183
[root@master k8s]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert  etcd.sh
// Server-certificate signing request listing the etcd cluster members (also from etcd-cert.sh)
[root@master k8s]# cat > server-csr.json <<EOF
{
  "CN": "etcd",
  "hosts": [
    "20.0.0.20",
    "20.0.0.30",
    "20.0.0.40"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}
EOF
// Generate the etcd server certificate: server-key.pem and server.pem
[root@master k8s]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
2020/09/28 16:20:51 [INFO] generate received request
2020/09/28 16:20:51 [INFO] received CSR
2020/09/28 16:20:51 [INFO] generating key: rsa-2048
2020/09/28 16:20:51 [INFO] encoded CSR
2020/09/28 16:20:51 [INFO] signed certificate with serial number 692180165096155002840320772719909924938206748479
2020/09/28 16:20:51 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  etcd-cert  etcd.sh
server.csr  server-csr.json  server-key.pem  server.pem
[root@master k8s]# mv ca* etcd-cert/
[root@master k8s]# ls
etcd-cert  etcd.sh  server.csr  server-csr.json  server-key.pem  server.pem
[root@master k8s]# mv server* etcd-cert/
[root@master k8s]# ls
etcd-cert  etcd.sh
Install etcd
[root@master k8s]# rz -E
rz waiting to receive.
[root@master k8s]# tar zxvf etcd-v3.3.10-linux-amd64.tar.gz
[root@master k8s]# cd etcd-v3.3.10-linux-amd64/
[root@master etcd-v3.3.10-linux-amd64]# ls
Documentation  etcd  etcdctl  README-etcdctl.md  README.md  READMEv2-etcdctl.md
// Create three directories to keep the generated files together: cfg for configuration files, bin for binaries, ssl for certificates
[root@master etcd-v3.3.10-linux-amd64]# mkdir /opt/etcd/{cfg,bin,ssl} -p
[root@master etcd-v3.3.10-linux-amd64]# cp etcd /opt/etcd/bin/
[root@master etcd-v3.3.10-linux-amd64]# cp etcdctl /opt/etcd/bin/
[root@master etcd-v3.3.10-linux-amd64]# cd ../etcd-cert/
[root@master etcd-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
etcd-cert.sh  server.csr  server-csr.json  server-key.pem  server.pem
[root@master etcd-cert]# cp *.pem /opt/etcd/ssl/
[root@master etcd-cert]# cd ..
// Running etcd.sh generates the etcd configuration file at /opt/etcd/cfg/etcd and the startup unit at /usr/lib/systemd/system/etcd.service
// Because the Nodes are not configured yet, the command hangs waiting for the other members; open a second terminal to watch the etcd process. If the Node firewalls are still up, or their rules are wrong, it will also error out
[root@master k8s]# bash etcd.sh etcd01 20.0.0.20 etcd02=https://20.0.0.30:2380,etcd03=https://20.0.0.40:2380
// Copy the certificates to the other nodes, since joining the cluster requires certificate authentication
[root@master k8s]# scp -r /opt/etcd/ root@20.0.0.30:/opt/
[root@master k8s]# scp -r /opt/etcd/ root@20.0.0.40:/opt
// Copy the startup unit to the other nodes
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@20.0.0.30:/usr/lib/systemd/system/
[root@master k8s]# scp /usr/lib/systemd/system/etcd.service root@20.0.0.40:/usr/lib/systemd/system/
// Edit the configuration files copied over from the master
Settings to change in node01's configuration file:
ETCD_NAME="etcd02"
ETCD_LISTEN_PEER_URLS="https://20.0.0.30:2380"
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.30:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.30:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.30:2379"
Settings to change in node02's configuration file:
ETCD_NAME="etcd03"
ETCD_LISTEN_PEER_URLS="https://20.0.0.40:2380"
ETCD_LISTEN_CLIENT_URLS="https://20.0.0.40:2379"
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://20.0.0.40:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://20.0.0.40:2379"
// Start etcd on both nodes
[root@node01 ssl]# systemctl start etcd
[root@node01 ssl]# systemctl enable etcd
// Check cluster health from the master
[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379" cluster-health
member 60bc7e36f63b965 is healthy: got healthy result from https://20.0.0.30:2379
member 2cc2add1558dd1c9 is healthy: got healthy result from https://20.0.0.40:2379
member e3197fd6a5933614 is healthy: got healthy result from https://20.0.0.20:2379
cluster is healthy
// Run the script on the master again; this time it completes, and the etcd cluster deployment is done
[root@master k8s]# bash etcd.sh etcd01 20.0.0.20 etcd02=https://20.0.0.30:2380,etcd03=https://20.0.0.40:2380
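Besides cluster-health, membership can be confirmed with member list; a sketch, run from the etcd-cert directory like the health check above:

[root@master etcd-cert]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.20:2379" member list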
Docker deployment on the Nodes

Install the Docker engine on every node; a typical installation sketch follows.
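The article does not include the Docker installation commands themselves; on CentOS 7 it usually looks like this (the Aliyun mirror URL is an assumption; any docker-ce repository works):

# Run on node01 and node02
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://mirrors.aliyun.com/docker-ce/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker
systemctl enable docker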
Flannel network configuration on the Nodes

Subnet allocation

// Write the container network range into etcd for flannel to use. The commands below rely on the etcd certificates and reference them by relative path, so run them from the directory that contains the certificates
[root@node01 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
The command echoes the value it wrote:
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
// Read the value back; this can be done from any node
[root@node02 ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}
Install flannel
// Have the flannel-v0.10.0-linux-amd64.tar.gz package ready on every Node
[root@node01 ~]# ls
anaconda-ks.cfg  flannel-v0.10.0-linux-amd64.tar.gz
// Extract it on every node
[root@node01 ~]# tar zxvf flannel-v0.10.0-linux-amd64.tar.gz
flanneld
mk-docker-opts.sh
README.md
// Create a k8s working directory for the related files
[root@node01 ~]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p
[root@node01 ~]# mv mk-docker-opts.sh flanneld /opt/kubernetes/bin/
// Write the flannel script, which generates the configuration file and the startup unit
// The ETCD_ENDPOINTS default points at etcd; since in this lab every machine, master and node alike, is an etcd cluster member, 127.0.0.1 works as the default
[root@node01 ~]# vim flannel.sh
#!/bin/bash

ETCD_ENDPOINTS=${1:-"http://127.0.0.1:2379"}

cat <<EOF >/opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=${ETCD_ENDPOINTS} \
-etcd-cafile=/opt/etcd/ssl/ca.pem \
-etcd-certfile=/opt/etcd/ssl/server.pem \
-etcd-keyfile=/opt/etcd/ssl/server-key.pem"
EOF

cat <<EOF >/usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service

[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq \$FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable flanneld
systemctl restart flanneld
// Enable the flannel network
[root@node01 ~]# bash flannel.sh https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379
// Hook docker up to flannel
[root@node01 ~]# vim /usr/lib/systemd/system/docker.service
[Service]
Type=notify
# the default is not to use systemd for cgroups because the delegate issues still
# exists and systemd currently does not support the cgroup feature set required
# for containers run by docker
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS -H fd:// --containerd=/run/containerd/containerd.sock
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always

[root@node01 ~]# cat /run/flannel/subnet.env
DOCKER_OPT_BIP="--bip=172.17.3.1/24"
DOCKER_OPT_IPMASQ="--ip-masq=false"
DOCKER_OPT_MTU="--mtu=1450"
DOCKER_NETWORK_OPTIONS=" --bip=172.17.3.1/24 --ip-masq=false --mtu=1450"
// After the restart below, docker0's address becomes 172.17.3.1
// Restart docker
[root@node01 ~]# systemctl daemon-reload
[root@node01 ~]# systemctl restart docker
// Inspect the flannel network
[root@node01 ~]# ifconfig
docker0: flags=4099<UP,BROADCAST,MULTICAST>  mtu 1500
        inet 172.17.3.1  netmask 255.255.255.0  broadcast 172.17.3.255
        inet6 fe80::42:a6ff:fedf:8b9d  prefixlen 64  scopeid 0x20<link>
        ether 02:42:a6:df:8b:9d  txqueuelen 0  (Ethernet)
        RX packets 7775  bytes 314728 (307.3 KiB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 16038  bytes 12470574 (11.8 MiB)
        TX errors 0  dropped 0  overruns 0  carrier 0  collisions 0

flannel.1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1450
        inet 172.17.3.0  netmask 255.255.255.255  broadcast 0.0.0.0
        inet6 fe80::740b:6dff:fe85:b995  prefixlen 64  scopeid 0x20<link>
        ether 76:0b:6d:85:b9:95  txqueuelen 0  (Ethernet)
        RX packets 7  bytes 588 (588.0 B)
...
node02 is configured almost identically to node01. Once both are done, create a container on each node, check the NIC inside each container, and ping one container from the other; if the ping succeeds, the overlay network works.
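A sketch of that cross-node test (the busybox image and the target address are assumptions; use the IP your node01 container actually receives):

# On node01: start a test container and note its IP (inside node01's 172.17.3.0/24 subnet)
[root@node01 ~]# docker run -it busybox sh
/ # ip addr show eth0

# On node02: start another container and ping node01's container
[root@node02 ~]# docker run -it busybox sh
/ # ping -c 3 172.17.3.2        # hypothetical node01 container IP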
master deployment

Certificate preparation

[root@master ~]# cd k8s/
[root@master k8s]# ls
etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz
// Create the directories for the files generated below
[root@master k8s]# mkdir -p /opt/kubernetes/{cfg,bin,ssl}
// Create a directory for the certificates and files the master needs
[root@master k8s]# mkdir k8s-cert
[root@master k8s]# cd k8s-cert/
[root@master k8s-cert]# rz -E
rz waiting to receive.
[root@master k8s-cert]# ls
k8s-cert.sh
// Define the CA configuration
[root@master k8s-cert]# cat > ca-config.json <<EOF
> {
> "signing": {
>"default": {
>"expiry": "87600h"
>},
>"profiles": {
>"kubernetes": {
>"expiry": "87600h",
>"usages": [
>"signing",
>"key encipherment",
>"server auth",
>"client auth"
>]
>}
>}
> }
> }
> EOF
[root@master k8s-cert]# ls
ca-config.json  k8s-cert.sh
// CA signing request
[root@master k8s-cert]# cat > ca-csr.json <<EOF
> {
>"CN": "kubernetes",
>"key": {
>"algo": "rsa",
>"size": 2048
>},
>"names": [
>{
>"C": "CN",
>"L": "Beijing",
>"ST": "Beijing",
>"O": "k8s",
>"OU": "System"
>}
>]
> }
> EOF
[root@master k8s-cert]# ls
ca-config.json  ca-csr.json  k8s-cert.sh
// Generate the CA key and certificate
[root@master k8s-cert]# cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2020/09/29 15:12:35 [INFO] generating a new CA key and certificate from CSR
2020/09/29 15:12:35 [INFO] generate received request
2020/09/29 15:12:35 [INFO] received CSR
2020/09/29 15:12:35 [INFO] generating key: rsa-2048
2020/09/29 15:12:35 [INFO] encoded CSR
2020/09/29 15:12:35 [INFO] signed certificate with serial number 3593449326719768921682602612991420656487961
[root@master k8s-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  k8s-cert.sh
// Server-certificate signing request; the hosts list covers every address that may reach the apiserver: the service cluster IP, loopback, both masters (20.0.0.20 and 20.0.0.10), the keepalived VIP 20.0.0.8, and both load balancers
[root@master k8s-cert]# cat > server-csr.json <<EOF
> {
>"CN": "kubernetes",
>"hosts": [
>"10.0.0.1",
>"127.0.0.1",
>"20.0.0.10",
>"20.0.0.20",
>"20.0.0.8",
>"20.0.0.50",
>"20.0.0.60",
>"kubernetes",
>"kubernetes.default",
>"kubernetes.default.svc",
>"kubernetes.default.svc.cluster",
>"kubernetes.default.svc.cluster.local"
>],
>"key": {
>"algo": "rsa",
>"size": 2048
>},
>"names": [
>{
>"C": "CN",
>"L": "BeiJing",
>"ST": "BeiJing",
>"O": "k8s",
>"OU": "System"
>}
>]
> }
> EOF
[root@master k8s-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem  k8s-cert.sh  server-csr.json
// Generate the server certificate
[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server
2020/09/29 15:15:52 [INFO] generate received request
2020/09/29 15:15:52 [INFO] received CSR
2020/09/29 15:15:52 [INFO] generating key: rsa-2048
2020/09/29 15:15:53 [INFO] encoded CSR
2020/09/29 15:15:53 [INFO] signed certificate with serial number 4577775142977539456210654504476898126934909
2020/09/29 15:15:53 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls
ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
k8s-cert.sh  server.csr  server-csr.json  server-key.pem  server.pem
// Admin client-certificate signing request
[root@master k8s-cert]# cat > admin-csr.json <<EOF
> {
> "CN": "admin",
> "hosts": [],
> "key": {
>"algo": "rsa",
>"size": 2048
> },
> "names": [
>{
>"C": "CN",
>"L": "BeiJing",
>"ST": "BeiJing",
>"O": "system:masters",
>"OU": "System"
>}
> ]
> }
> EOF
[root@master k8s-cert]# ls
admin-csr.json  ca-config.json  ca.csr  ca-csr.json  ca-key.pem  ca.pem
k8s-cert.sh  server.csr  server-csr.json  server-key.pem  server.pem
// Generate the admin certificate
[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes admin-csr.json | cfssljson -bare admin
2020/09/29 15:18:20 [INFO] generate received request
2020/09/29 15:18:20 [INFO] received CSR
2020/09/29 15:18:20 [INFO] generating key: rsa-2048
2020/09/29 15:18:20 [INFO] encoded CSR
2020/09/29 15:18:20 [INFO] signed certificate with serial number 6947870123681991616501552650764507059877403
2020/09/29 15:18:20 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls
admin.csr  admin-csr.json  admin-key.pem  admin.pem  ca-config.json  ca.csr  ca-csr.json
ca-key.pem  ca.pem  k8s-cert.sh  server.csr  server-csr.json  server-key.pem  server.pem
// kube-proxy client-certificate signing request
[root@master k8s-cert]# cat > kube-proxy-csr.json <<EOF
> {
> "CN": "system:kube-proxy",
> "hosts": [],
> "key": {
>"algo": "rsa",
>"size": 2048
> },
> "names": [
>{
>"C": "CN",
>"L": "BeiJing",
>"ST": "BeiJing",
>"O": "k8s",
>"OU": "System"
>}
> ]
> }
> EOF
[root@master k8s-cert]# ls
admin.csr  admin-csr.json  admin-key.pem  admin.pem  ca-config.json
ca.csr  ca-csr.json  ca-key.pem  ca.pem  k8s-cert.sh
kube-proxy-csr.json  server.csr  server-csr.json  server-key.pem  server.pem
// Generate the kube-proxy certificate
[root@master k8s-cert]# cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy
2020/09/29 15:19:06 [INFO] generate received request
2020/09/29 15:19:06 [INFO] received CSR
2020/09/29 15:19:06 [INFO] generating key: rsa-2048
2020/09/29 15:19:06 [INFO] encoded CSR
2020/09/29 15:19:06 [INFO] signed certificate with serial number 3574700643984106503033589912926081516191700
2020/09/29 15:19:06 [WARNING] This certificate lacks a "hosts" field. This makes it unsuitable for
websites. For more information see the Baseline Requirements for the Issuance and Management
of Publicly-Trusted Certificates, v.1.1.6, from the CA/Browser Forum (https://cabforum.org);
specifically, section 10.2.3 ("Information Requirements").
[root@master k8s-cert]# ls
admin.csr  admin-csr.json  admin-key.pem  admin.pem  ca-config.json  ca.csr
ca-csr.json  ca-key.pem  ca.pem  k8s-cert.sh  kube-proxy.csr  kube-proxy-csr.json
kube-proxy-key.pem  kube-proxy.pem  server.csr  server-csr.json  server-key.pem  server.pem
// Copy the CA and server certificates into place
[root@master k8s-cert]# cp ca*pem server*pem /opt/kubernetes/ssl/
Install the master components
[root@master k8s-cert]# cd ..
[root@master k8s]# ls
etcd-cert  etcd.sh  etcd-v3.3.10-linux-amd64  etcd-v3.3.10-linux-amd64.tar.gz  k8s-cert
[root@master k8s]# rz -E
rz waiting to receive.
[root@master k8s]# tar zxvf kubernetes-server-linux-amd64.tar.gz
// Copy the master binaries into place
[root@master k8s]# cd /root/k8s/kubernetes/server/bin/
[root@master bin]# cp kube-apiserver kubectl kube-controller-manager kube-scheduler /opt/kubernetes/bin/
[root@master bin]# cd /root/k8s/
// Generate a bootstrap token (token.csv format: token,user,uid,"group")
[root@master k8s]# head -c 16 /dev/urandom | od -An -t x | tr -d ' '
c4c16d4c95b7f13ccc5062bf6561224e
[root@master k8s]# vim /opt/kubernetes/cfg/token.csv
c4c16d4c95b7f13ccc5062bf6561224e,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
// Start the apiserver: master.zip contains the scripts that generate the configs and units for apiserver, controller-manager, and scheduler
[root@master k8s]# rz -E
rz waiting to receive.
[root@master k8s]# unzip master.zip
Archive:  master.zip
  inflating: apiserver.sh
  inflating: controller-manager.sh
  inflating: scheduler.sh
[root@master k8s]# ls
apiserver.sh           etcd.sh                          k8s-cert    master.zip
controller-manager.sh  etcd-v3.3.10-linux-amd64         kubernetes  scheduler.sh
etcd-cert              etcd-v3.3.10-linux-amd64.tar.gz  kubernetes-server-linux-amd64.tar.gz
[root@master k8s]# bash apiserver.sh 20.0.0.20 https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-apiserver.service to /usr/lib/systemd/system/kube-apiserver.service.
// Verify the apiserver is running and inspect its generated configuration
[root@master k8s]# ps aux | grep kube
[root@master k8s]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://20.0.0.20:2379,https://20.0.0.30:2379,https://20.0.0.40:2379 \
--bind-address=20.0.0.20 \
--secure-port=6443 \
--advertise-address=20.0.0.20 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--kubelet-https=true \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

[root@master k8s]# netstat -antp | grep 6443
tcp   0   0 20.0.0.20:6443    0.0.0.0:*         LISTEN       17861/kube-apiserve
tcp   0   0 20.0.0.20:36170   20.0.0.20:6443    ESTABLISHED  17861/kube-apiserve
tcp   0   0 20.0.0.20:6443    20.0.0.20:36170   ESTABLISHED  17861/kube-apiserve
[root@master k8s]# netstat -antp | grep 8080
tcp   0   0 127.0.0.1:8080    0.0.0.0:*         LISTEN       17861/kube-apiserve
// Start the scheduler
[root@master k8s]# ./scheduler.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-scheduler.service to /usr/lib/systemd/system/kube-scheduler.service.
// Start the controller-manager
[root@master k8s]# chmod +x controller-manager.sh
[root@master k8s]# ./controller-manager.sh 127.0.0.1
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-controller-manager.service to /usr/lib/systemd/system/kube-controller-manager.service.
// Check the health of the kubernetes cluster components
[root@master k8s]# /opt/kubernetes/bin/kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
etcd-0               Healthy   {"health":"true"}
scheduler            Healthy   ok
Node deployment

// On the master: copy the binaries the nodes need
[root@master bin]# scp kubelet kube-proxy root@20.0.0.30:/opt/kubernetes/bin/
root@20.0.0.30's password:
kubelet       100%  168MB  105.8MB/s  00:01
kube-proxy    100%   48MB   63.5MB/s  00:00
[root@master bin]# scp kubelet kube-proxy root@20.0.0.40:/opt/kubernetes/bin/
root@20.0.0.40's password:
kubelet       100%  168MB  124.2MB/s  00:01
kube-proxy    100%   48MB   77.2MB/s  00:00
// On node01
Unpack node.zip; it mainly contains two scripts, kubelet.sh and proxy.sh
[root@node01 ~]# ls
anaconda-ks.cfg  core.17702  flannel.sh  flannel-v0.10.0-linux-amd64.tar.gz
initial-setup-ks.cfg  kubelet.sh  node.zip  proxy.sh  README.md
公共  模板  视频  图片  文档  下载  音乐  桌面
[root@node01 ~]# cat kubelet.sh
#!/bin/bash

NODE_ADDRESS=$1
DNS_SERVER_IP=${2:-"10.0.0.2"}

cat <<EOF >/opt/kubernetes/cfg/kubelet
KUBELET_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \\
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \\
--config=/opt/kubernetes/cfg/kubelet.config \\
--cert-dir=/opt/kubernetes/ssl \\
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"
EOF

cat <<EOF >/opt/kubernetes/cfg/kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: ${NODE_ADDRESS}
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- ${DNS_SERVER_IP}
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
EOF

cat <<EOF >/usr/lib/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
After=docker.service
Requires=docker.service

[Service]
EnvironmentFile=/opt/kubernetes/cfg/kubelet
ExecStart=/opt/kubernetes/bin/kubelet \$KUBELET_OPTS
Restart=on-failure
KillMode=process

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kubelet
systemctl restart kubelet
[root@node01 ~]# cat proxy.sh
#!/bin/bash

NODE_ADDRESS=$1

cat <<EOF >/opt/kubernetes/cfg/kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \\
--v=4 \\
--hostname-override=${NODE_ADDRESS} \\
--cluster-cidr=10.0.0.0/24 \\
--proxy-mode=ipvs \\
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
EOF

cat <<EOF >/usr/lib/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Proxy
After=network.target

[Service]
EnvironmentFile=-/opt/kubernetes/cfg/kube-proxy
ExecStart=/opt/kubernetes/bin/kube-proxy \$KUBE_PROXY_OPTS
Restart=on-failure

[Install]
WantedBy=multi-user.target
EOF

systemctl daemon-reload
systemctl enable kube-proxy
systemctl restart kube-proxy
// On the master
// Create a directory for the kubeconfig files that the nodes' kubelet and kube-proxy will use
[root@master k8s]# mkdir kubeconfig
[root@master k8s]# cd kubeconfig/
// Upload kubeconfig.sh here and rename it
[root@master kubeconfig]# mv kubeconfig.sh kubeconfig
[root@master kubeconfig]# vim kubeconfig
# Create TLS Bootstrapping Token (this part of the script stays commented out; we reuse the token generated earlier)
#BOOTSTRAP_TOKEN=$(head -c 16 /dev/urandom | od -An -t x | tr -d ' ')
#BOOTSTRAP_TOKEN=0fb61c46f8991b718eb38d27b605b008
#cat > token.csv <<EOF
#...
// Note: the token in the following stanza of the script must be replaced with your own
# Set client authentication parameters
kubectl config set-credentials kubelet-bootstrap \
  --token=c4c16d4c95b7f13ccc5062bf6561224e \
  --kubeconfig=bootstrap.kubeconfig
// The token value (the string c4c16d4c95b7f13ccc5062bf6561224e above) comes from the token.csv created earlier:
[root@master kubeconfig]# cat /opt/kubernetes/cfg/token.csv
c4c16d4c95b7f13ccc5062bf6561224e,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
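For reference, the heart of the kubeconfig script is a series of kubectl config calls along these lines (a sketch of the bootstrap.kubeconfig half; APISERVER and SSL_DIR are the script's two positional arguments, and kube-proxy.kubeconfig is generated the same way using the kube-proxy certificates instead of a token):

APISERVER=$1                              # e.g. 20.0.0.20
SSL_DIR=$2                                # e.g. /root/k8s/k8s-cert/
export KUBE_APISERVER="https://$APISERVER:6443"

# Cluster entry: apiserver address plus the CA used to verify it
kubectl config set-cluster kubernetes \
  --certificate-authority=$SSL_DIR/ca.pem \
  --embed-certs=true \
  --server=${KUBE_APISERVER} \
  --kubeconfig=bootstrap.kubeconfig

# Client credentials: the bootstrap token from token.csv
kubectl config set-credentials kubelet-bootstrap \
  --token=c4c16d4c95b7f13ccc5062bf6561224e \
  --kubeconfig=bootstrap.kubeconfig

# Context tying the two together, then make it the default
kubectl config set-context default \
  --cluster=kubernetes \
  --user=kubelet-bootstrap \
  --kubeconfig=bootstrap.kubeconfig
kubectl config use-context default --kubeconfig=bootstrap.kubeconfig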
// Set the PATH environment variable so the k8s binaries are found
[root@master kubeconfig]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/
[root@master kubeconfig]# source /etc/profile
// Check component health
[root@master kubeconfig]# kubectl get cs
NAME                 STATUS    MESSAGE             ERROR
controller-manager   Healthy   ok
scheduler            Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}
// Generate the configuration files bootstrap.kubeconfig and kube-proxy.kubeconfig
[root@master kubeconfig]# bash kubeconfig 20.0.0.20 /root/k8s/k8s-cert/
[root@master kubeconfig]# ls
bootstrap.kubeconfig  kubeconfig  kube-proxy.kubeconfig
// Copy the configuration files to the nodes
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.30:/opt/kubernetes/cfg
root@20.0.0.30's password:
bootstrap.kubeconfig     100% 2163   1.1MB/s  00:00
kube-proxy.kubeconfig    100% 6269   5.1MB/s  00:00
[root@master kubeconfig]# scp bootstrap.kubeconfig kube-proxy.kubeconfig root@20.0.0.40:/opt/kubernetes/cfg
root@20.0.0.40's password:
bootstrap.kubeconfig     100% 2163   1.3MB/s  00:00
kube-proxy.kubeconfig    100% 6269   8.4MB/s  00:00
// Create the bootstrap role binding so kubelets can request certificate signing from the apiserver (critical)
[root@master kubeconfig]# kubectl create clusterrolebinding kubelet-bootstrap --clusterrole=system:node-bootstrapper --user=kubelet-bootstrap
clusterrolebinding.rbac.authorization.k8s.io/kubelet-bootstrap created
// On node01
// kubelet.sh (shown in full above) generates the configuration files kubelet and kubelet.config
// (kubelet.kubeconfig appears after bootstrapping), plus the startup unit kubelet.service, and then starts the kubelet
[root@node01 ~]# bash kubelet.sh 20.0.0.30
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node01 ~]# ps aux |grep kube
// node01's kubelet is now running and has sent a certificate signing request to the master
// On the master:
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs   87s   kubelet-bootstrap   Pending
// Approve the request
[root@master kubeconfig]# kubectl certificate approve node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs
certificatesigningrequest.certificates.k8s.io/node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs approved
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE     REQUESTOR           CONDITION
node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs   9m31s   kubelet-bootstrap   Approved,Issued
// List the cluster nodes
[root@master kubeconfig]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
20.0.0.30   Ready    <none>   54s   v1.12.3
// Start kube-proxy on node01: proxy.sh (shown in full above) generates the kube-proxy configuration
// file and the startup unit kube-proxy.service, and then starts the proxy
[root@node01 ~]# bash proxy.sh 20.0.0.30
[root@node01 ~]# systemctl status kube-proxy.service
node02 deployment

// node01 is fully deployed and already has every file the services need, so we simply copy them over to node02
// Review the files first
[root@node01 opt]# tree kubernetes/
kubernetes/
├── bin
│   ├── flanneld
│   ├── kubelet
│   ├── kube-proxy
│   └── mk-docker-opts.sh
├── cfg
│   ├── bootstrap.kubeconfig
│   ├── flanneld
│   ├── kubelet
│   ├── kubelet.config
│   ├── kubelet.kubeconfig
│   ├── kube-proxy
│   └── kube-proxy.kubeconfig
└── ssl
    ├── kubelet-client-2020-09-30-09-42-34.pem
    ├── kubelet-client-current.pem -> /opt/kubernetes/ssl/kubelet-client-2020-09-30-09-42-34.pem
    ├── kubelet.crt
    └── kubelet.key
[root@node01 ~]# scp -r /opt/kubernetes/ root@20.0.0.40:/opt/
[root@node01 ~]# scp /usr/lib/systemd/system/{kubelet,kube-proxy}.service root@20.0.0.40:/usr/lib/systemd/system/
// The files under ssl/ are the certificates the master issued to node01's kubelet during bootstrap authorization;
// node02 must request its own, so delete them
[root@node02 ~]# cd /opt/kubernetes/ssl/
[root@node02 ssl]# rm -rf *
// Edit the copied configuration files; the addresses must become node02's own, 20.0.0.40
[root@node02 ssl]# cd /opt/kubernetes/cfg/
[root@node02 cfg]# vim kubelet
KUBELET_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.40 \
--kubeconfig=/opt/kubernetes/cfg/kubelet.kubeconfig \
--bootstrap-kubeconfig=/opt/kubernetes/cfg/bootstrap.kubeconfig \
--config=/opt/kubernetes/cfg/kubelet.config \
--cert-dir=/opt/kubernetes/ssl \
--pod-infra-container-image=registry.cn-hangzhou.aliyuncs.com/google-containers/pause-amd64:3.0"

[root@node02 cfg]# vim kubelet.config
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
address: 20.0.0.40
port: 10250
readOnlyPort: 10255
cgroupDriver: cgroupfs
clusterDNS:
- 10.0.0.2
clusterDomain: cluster.local.
failSwapOn: false
authentication:
  anonymous:
    enabled: true
[root@node02 cfg]# vim kube-proxy
KUBE_PROXY_OPTS="--logtostderr=true \
--v=4 \
--hostname-override=20.0.0.40 \
--cluster-cidr=10.0.0.0/24 \
--proxy-mode=ipvs \
--kubeconfig=/opt/kubernetes/cfg/kube-proxy.kubeconfig"
// Start the services
[root@node02 cfg]# systemctl start kubelet.service
[root@node02 cfg]# systemctl enable kubelet.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service.
[root@node02 cfg]# systemctl start kube-proxy.service
[root@node02 cfg]# systemctl enable kube-proxy.service
Created symlink from /etc/systemd/system/multi-user.target.wants/kube-proxy.service to /usr/lib/systemd/system/kube-proxy.service.
// The master now sees node02's request
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-kv2SJewe1T9u0RMPmidU5SyB0WjAhNiSAZOLZdbVAcc   16s   kubelet-bootstrap   Pending
node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs   30m   kubelet-bootstrap   Approved,Issued
// Approve it
[root@master kubeconfig]# kubectl certificate approve node-csr-kv2SJewe1T9u0RMPmidU5SyB0WjAhNiSAZOLZdbVAcc
certificatesigningrequest.certificates.k8s.io/node-csr-kv2SJewe1T9u0RMPmidU5SyB0WjAhNiSAZOLZdbVAcc approved
[root@master kubeconfig]# kubectl get csr
NAME                                                   AGE   REQUESTOR           CONDITION
node-csr-kv2SJewe1T9u0RMPmidU5SyB0WjAhNiSAZOLZdbVAcc   74s   kubelet-bootstrap   Approved,Issued
node-csr-leFOs2iguW9ET40Se9veqISkG1ioQgLg8NlB089AWIs   31m   kubelet-bootstrap   Approved,Issued
// List the cluster nodes
[root@master kubeconfig]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
20.0.0.30   Ready    <none>   22m   v1.12.3
20.0.0.40   Ready    <none>   28s   v1.12.3
Single-master deployment complete

The configuration so far gives a working single-master cluster; next we extend it into a multi-master structure.

master02 deployment

master02 is 20.0.0.10. Flush its firewall rules and set its hostname to master02.
// Copy everything master02 needs from the first master
[root@master kubernetes]# scp -r /opt/kubernetes/ root@20.0.0.10:/opt/
[root@master k8s]# scp /usr/lib/systemd/system/{kube-apiserver,kube-controller-manager,kube-scheduler}.service root@20.0.0.10:/usr/lib/systemd/system/
[root@master ~]# scp -r /opt/etcd/ root@20.0.0.10:/opt/
// Edit the copied configuration file
[root@master02 cfg]# vim kube-apiserver
# Change the following two options to master02's own address
--bind-address=20.0.0.10 \
--advertise-address=20.0.0.10 \
// Add the environment variable so the k8s commands are recognized
[root@master02 cfg]# vim /etc/profile
export PATH=$PATH:/opt/kubernetes/bin/
[root@master02 ~]# source /etc/profile
// Query node information
[root@master02 ~]# kubectl get nodes
NAME        STATUS   ROLES    AGE   VERSION
20.0.0.30   Ready    <none>   29h   v1.12.3
20.0.0.40   Ready    <none>   29h   v1.12.3
Load balancer deployment

// Install nginx; both load balancers are configured the same way
[root@nginx01 ~]# cd /etc/yum.repos.d/
[root@nginx01 yum.repos.d]# vim nginx.repo
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/7/$basearch/
gpgcheck=0
[root@nginx01 yum.repos.d]# yum -y install nginx
// Add layer-4 forwarding to pass apiserver requests through to the masters
[root@nginx01 yum.repos.d]# vim /etc/nginx/nginx.conf
events {
    worker_connections 1024;
}
# Add the following block
stream {
    log_format main '$remote_addr $upstream_addr - [$time_local] $status $upstream_bytes_sent';
    access_log /var/log/nginx/k8s-access.log main;
    upstream k8s-apiserver {
        server 20.0.0.10:6443;
        server 20.0.0.20:6443;
    }
    server {
        listen 6443;
        proxy_pass k8s-apiserver;
    }
}
[root@nginx01 yum.repos.d]# systemctl start nginx
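A quick sanity check that the stream block took effect (a sketch):

[root@nginx01 yum.repos.d]# nginx -t                      # configuration syntax check
[root@nginx01 yum.repos.d]# ss -lntp | grep 6443          # nginx should now listen on 6443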
// Install keepalived
[root@nginx01 yum.repos.d]# yum -y install keepalived
[root@nginx01 yum.repos.d]# vim /etc/keepalived/keepalived.conf
! Configuration File for keepalived

global_defs {
   # Notification recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # Notification sender
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state MASTER
    interface ens33
    virtual_router_id 51    # VRRP route ID; unique per instance
    priority 100            # priority; set 90 on the backup
    advert_int 1            # VRRP heartbeat interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.8/24
    }
    track_script {
        check_nginx
    }
}
// The keepalived configuration on nginx02 is the same except for state and priority:
! Configuration File for keepalived

global_defs {
   # Notification recipients
   notification_email {
     acassen@firewall.loc
     failover@firewall.loc
     sysadmin@firewall.loc
   }
   # Notification sender
   notification_email_from Alexandre.Cassen@firewall.loc
   smtp_server 127.0.0.1
   smtp_connect_timeout 30
   router_id NGINX_MASTER
}

vrrp_script check_nginx {
    script "/etc/nginx/check_nginx.sh"
}

vrrp_instance VI_1 {
    state BACKUP
    interface ens33
    virtual_router_id 51    # VRRP route ID; unique per instance
    priority 90             # backup priority, lower than the master's 100
    advert_int 1            # VRRP heartbeat interval, default 1 second
    authentication {
        auth_type PASS
        auth_pass 1111
    }
    virtual_ipaddress {
        20.0.0.8/24
    }
    track_script {
        check_nginx
    }
}
// The script referenced by the keepalived configuration: it checks nginx, and if nginx has stopped it stops keepalived too, releasing the VIP
[root@nginx01 ~]# vim /etc/nginx/check_nginx.sh
#!/bin/bash
# Count running nginx processes, excluding this pipeline itself
count=$(ps -ef |grep nginx |egrep -cv "grep|$$")

if [ "$count" -eq 0 ];then
    systemctl stop keepalived
fi
[root@nginx01 ~]# chmod +x /etc/nginx/check_nginx.sh
[root@nginx01 ~]# systemctl start keepalived
// Check VIP failover
The VIP should sit on nginx01 first; when nginx on nginx01 dies, it should move to nginx02.
// To verify the drift: run pkill nginx on nginx01, then ip a on nginx02
// To recover: on nginx01 start the nginx service first, then keepalived
// The nginx web root is /usr/share/nginx/html
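A sketch of that failover drill:

# On nginx01: confirm the VIP is present, then kill nginx; check_nginx.sh stops keepalived
[root@nginx01 ~]# ip a | grep 20.0.0.8
[root@nginx01 ~]# pkill nginx

# On nginx02: the VIP should have drifted over
[root@nginx02 ~]# ip a | grep 20.0.0.8

# Recovery: on nginx01 start nginx first, then keepalived; the VIP drifts back
[root@nginx01 ~]# systemctl start nginx
[root@nginx01 ~]# systemctl start keepalived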
Point the nodes at the load balancer

// So far the nodes have been talking to one specific master; they should now talk to the keepalived VIP instead
// Change the node configuration files to the unified VIP (bootstrap.kubeconfig, kubelet.kubeconfig, kube-proxy.kubeconfig)
// Make the change on both nodes
[root@node01 cfg]# vim /opt/kubernetes/cfg/bootstrap.kubeconfig
[root@node01 cfg]# vim /opt/kubernetes/cfg/kubelet.kubeconfig
[root@node01 cfg]# vim /opt/kubernetes/cfg/kube-proxy.kubeconfig
// In each of the three files, set the server line to the VIP
server: https://20.0.0.8:6443
[root@node01 cfg]# systemctl restart kubelet.service
[root@node01 cfg]# systemctl restart kube-proxy.service
// The node traffic now shows up in nginx's access log
[root@nginx01 ~]# tail /var/log/nginx/k8s-access.log
20.0.0.30 20.0.0.10:6443 - [01/Oct/2020:16:07:16 +0800] 200 1114
20.0.0.30 20.0.0.10:6443 - [01/Oct/2020:16:07:16 +0800] 200 1114
20.0.0.40 20.0.0.10:6443 - [01/Oct/2020:16:10:02 +0800] 200 1115
20.0.0.40 20.0.0.20:6443 - [01/Oct/2020:16:10:02 +0800] 200 1116
// Create a pod from a master
[root@master02 ~]# kubectl run nginx --image=nginx
kubectl run --generator=deployment/apps.v1beta1 is DEPRECATED and will be removed in a future version. Use kubectl create instead.
deployment.apps/nginx created
[root@master02 ~]# kubectl get pods
NAME                    READY   STATUS              RESTARTS   AGE
nginx-dbddb74b8-cfkst   0/1     ContainerCreating   0          15s
[root@master02 ~]# kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-dbddb74b8-cfkst   1/1     Running   0          37s
// View the pod's logs
// This needs authorization first, otherwise it fails like this:
[root@master02 ~]# kubectl logs nginx-dbddb74b8-cfkst
Error from server (Forbidden): Forbidden (user=system:anonymous, verb=get, resource=nodes, subresource=proxy) ( pods/log nginx-dbddb74b8-cfkst)
// Grant the permission, and the logs become readable
[root@master02 ~]# kubectl create clusterrolebinding cluster-system-anonymous --clusterrole=cluster-admin --user=system:anonymous
clusterrolebinding.rbac.authorization.k8s.io/cluster-system-anonymous created
[root@master02 ~]# kubectl logs nginx-dbddb74b8-cfkst
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up
// View the pod's network details
[root@master02 ~]# kubectl get pods -o wide
NAME                    READY   STATUS    RESTARTS   AGE   IP           NODE        NOMINATED NODE
nginx-dbddb74b8-cfkst   1/1     Running   0          11m   172.17.3.3   20.0.0.30
// The pod's web page can be fetched from the node that hosts it
[root@node01 ~]# curl 172.17.3.3
...
Welcome to nginx!
If you see this page, the nginx web server is successfully installed and
working. Further configuration is required.
For online documentation and support please refer to nginx.org.
Commercial support is available at nginx.com.
Thank you for using nginx.
// The access is visible in the pod's logs on the master
[root@master02 ~]# kubectl logs nginx-dbddb74b8-cfkst
/docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
/docker-entrypoint.sh: Looking for shell scripts in /docker-entrypoint.d/
/docker-entrypoint.sh: Launching /docker-entrypoint.d/10-listen-on-ipv6-by-default.sh
10-listen-on-ipv6-by-default.sh: Getting the checksum of /etc/nginx/conf.d/default.conf
10-listen-on-ipv6-by-default.sh: Enabled listen on IPv6 in /etc/nginx/conf.d/default.conf
/docker-entrypoint.sh: Launching /docker-entrypoint.d/20-envsubst-on-templates.sh
/docker-entrypoint.sh: Configuration complete; ready for start up