Deploying a Kubernetes (K8s) Cluster with Ansible
Contents

- Check the network: k8s-check.yaml verifies that every k8s host is reachable;
- Check that every k8s host's operating system version meets the requirement;
- Connection configuration: k8s-conn-cfg.yaml
- Configure DNS resolution for the k8s cluster: k8s-hosts-cfg.yaml
- Configure the yum repositories: k8s-yum-cfg.yaml
- Clock synchronization: k8s-time-sync.yaml
- Disable the iptables, firewalld, and NetworkManager services
- Disable SELinux and swap: k8s-SE-swap-disable.yaml
- Tune kernel parameters: k8s-kernel-cfg.yaml
- Configure ipvs: k8s-ipvs-cfg.yaml
- Install docker: k8s-docker-install.yaml
- Install the k8s components (kubeadm/kubelet/kubectl): k8s-install-kubepkgs.yaml
- Pull the cluster images: k8s-apps-images.yaml
- Initialize the k8s cluster: k8s-cluster-init.yaml
Host | IP address | Components |
---|---|---|
ansible | 192.168.175.130 | ansible |
master | 192.168.175.140 | docker, kubectl, kubeadm, kubelet |
node1 | 192.168.175.141 | docker, kubectl, kubeadm, kubelet |
node2 | 192.168.175.142 | docker, kubectl, kubeadm, kubelet |
```shell
$ ansible-playbook -v k8s-time-sync.yaml --syntax-check
$ ansible-playbook -v k8s-*.yaml -C
$ ansible-playbook -v k8s-yum-cfg.yaml -C --start-at-task="Clean origin dir" --step
$ ansible-playbook -v k8s-kernel-cfg.yaml --step
```
Host inventory file:
/root/ansible/hosts
```ini
[k8s_cluster]
master ansible_host=192.168.175.140
node1  ansible_host=192.168.175.141
node2  ansible_host=192.168.175.142

[k8s_cluster:vars]
ansible_port=22
ansible_user=root
ansible_password=hello123
```
Check the network: k8s-check.yaml

- Verify that every k8s host is reachable;
- Check that every host's operating system version meets the requirement.

```yaml
- name: step01_check
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: check network
      shell:
        cmd: "ping -c 3 -m 2 {{ ansible_host }}"
      delegate_to: localhost

    - name: get system version
      shell: cat /etc/system-release
      register: system_release

    - name: check system version
      vars:
        system_version: "{{ system_release.stdout | regex_search('([7-9].[0-9]+).*?') }}"
        suitable_version: 7.5
      debug:
        msg: "{{ 'The version of the operating system is ' + system_version + ', suitable!' if (system_version | float >= suitable_version) else 'The version of the operating system is unsuitable' }}"
```
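The version check above is Jinja2: `regex_search` pulls a `7.x`–`9.x` version out of `/etc/system-release`, and the result is compared as a float against 7.5. A plain-Python sketch of the same logic (the release strings are made-up samples, not output from these hosts):

```python
import re

def version_ok(release: str, minimum: float = 7.5) -> bool:
    """Mimic the playbook's check: extract a 7.x-9.x version string
    from /etc/system-release and compare it numerically."""
    match = re.search(r"[7-9]\.[0-9]+", release)
    if match is None:
        return False  # no recognizable version at all
    return float(match.group()) >= minimum

print(version_ok("CentOS Linux release 7.9.2009 (Core)"))  # True
print(version_ok("CentOS Linux release 7.2.1511 (Core)"))  # False
```

Note that the comparison only looks at the major.minor part, exactly as the Jinja `float` cast does.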
Debug commands:

```shell
$ ansible-playbook --ssh-extra-args '-o StrictHostKeyChecking=no' -v -C k8s-check.yaml
$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v -C k8s-check.yaml
$ ANSIBLE_HOST_KEY_CHECKING=False ansible-playbook -v k8s-check.yaml --start-at-task="get system version"
```
Connection configuration: k8s-conn-cfg.yaml

- Add k8s hostname entries to the /etc/hosts file on the ansible server
- Generate a key pair and configure passwordless SSH from ansible to every k8s host

```yaml
- name: step02_conn_cfg
  hosts: k8s_cluster
  gather_facts: no
  vars_prompt:
    - name: RSA
      prompt: Generate RSA or not(Yes/No)?
      default: "no"
      private: no
    - name: password
      prompt: input your login password?
      default: "hello123"
  tasks:
    - name: Add DNS of k8s to ansible
      delegate_to: localhost
      lineinfile:
        path: /etc/hosts
        line: "{{ ansible_host }} {{ inventory_hostname }}"
        backup: yes

    - name: Generate RSA
      run_once: true
      delegate_to: localhost
      shell:
        cmd: ssh-keygen -t rsa -f ~/.ssh/id_rsa -N ''
        creates: /root/.ssh/id_rsa
      when: RSA | bool

    - name: Configure password free login
      delegate_to: localhost
      shell: |
        /usr/bin/ssh-keyscan {{ ansible_host }} >> /root/.ssh/known_hosts 2> /dev/null
        /usr/bin/ssh-keyscan {{ inventory_hostname }} >> /root/.ssh/known_hosts 2> /dev/null
        /usr/bin/sshpass -p'{{ password }}' ssh-copy-id root@{{ ansible_host }}
        #/usr/bin/sshpass -p'{{ password }}' ssh-copy-id root@{{ inventory_hostname }}

    - name: Test ssh
      shell: hostname
```

(The `Generate RSA` and `Configure password free login` tasks must run on the control node, which the `[host -> localhost]` markers in the run output confirm, so `delegate_to: localhost` is included here.)
Run:

```
$ ansible-playbook k8s-conn-cfg.yaml
Generate RSA or not(Yes/No)? [no]: yes
input your login password? [hello123]:

PLAY [step02_conn_cfg] *****************************************

TASK [Add DNS of k8s to ansible] *******************************
ok: [master -> localhost]
ok: [node1 -> localhost]
ok: [node2 -> localhost]

TASK [Generate RSA] ********************************************
changed: [master -> localhost]

TASK [Configure password free login] ***************************
changed: [node1 -> localhost]
changed: [node2 -> localhost]

TASK [Test ssh] ************************************************
changed: [master]
changed: [node1]
changed: [node2]

PLAY RECAP *****************************************************
master : ok=4 changed=3 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node1  : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node2  : ok=3 changed=2 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Configure DNS resolution for the k8s cluster: k8s-hosts-cfg.yaml

- Set each hostname
- Add entries for every host to each other's /etc/hosts file

```yaml
- name: step03_cfg_host
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: set hostname
      hostname:
        name: "{{ inventory_hostname }}"
        use: systemd

    - name: Add dns to each other
      lineinfile:
        path: /etc/hosts
        backup: yes
        line: "{{ item.value.ansible_host }} {{ item.key }}"
      loop: "{{ hostvars | dict2items }}"
      loop_control:
        label: "{{ item.key }} {{ item.value.ansible_host }}"
```
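The `hostvars | dict2items` loop effectively renders one `<ip> <hostname>` line per host. This is not part of the playbook, just a Python sketch of how that loop expands (host data taken from the inventory above):

```python
# hostvars as the playbook sees it, trimmed to the relevant key
hostvars = {
    "master": {"ansible_host": "192.168.175.140"},
    "node1":  {"ansible_host": "192.168.175.141"},
    "node2":  {"ansible_host": "192.168.175.142"},
}

# dict2items turns the mapping into [{'key': ..., 'value': ...}, ...];
# the lineinfile task then writes "<ip> <hostname>" for every item
items = [{"key": k, "value": v} for k, v in hostvars.items()]
lines = [f"{item['value']['ansible_host']} {item['key']}" for item in items]
print(lines)
```

Because the loop runs on every host with the full `hostvars` dict, each node ends up with entries for all three machines.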
Run:

```
$ ansible-playbook k8s-hosts-cfg.yaml

PLAY [step03_cfg_host] *****************************************

TASK [set hostname] ********************************************
ok: [master]
ok: [node1]
ok: [node2]

TASK [Add dns to each other] ***********************************
ok: [node2] => (item=node1 192.168.175.141)
ok: [master] => (item=node1 192.168.175.141)
ok: [node1] => (item=node1 192.168.175.141)
ok: [node2] => (item=node2 192.168.175.142)
ok: [master] => (item=node2 192.168.175.142)
ok: [node1] => (item=node2 192.168.175.142)
ok: [node2] => (item=master 192.168.175.140)
ok: [master] => (item=master 192.168.175.140)
ok: [node1] => (item=master 192.168.175.140)

PLAY RECAP *****************************************************
master : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node1  : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node2  : ok=2 changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Configure the yum repositories: k8s-yum-cfg.yaml

```yaml
- name: step04_yum_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Create back-up directory
      file:
        path: /etc/yum.repos.d/org/
        state: directory

    - name: Back-up old Yum files
      shell:
        cmd: mv -f /etc/yum.repos.d/*.repo /etc/yum.repos.d/org/
        removes: /etc/yum.repos.d/org/

    - name: Add new Yum files
      copy:
        src: ./files_yum/
        dest: /etc/yum.repos.d/

    - name: Check yum.repos.d
      shell:
        cmd: ls /etc/yum.repos.d/*
```
Clock synchronization: k8s-time-sync.yaml

```yaml
- name: step05_time_sync
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Start chronyd.service
      systemd:
        name: chronyd.service
        state: started
        enabled: yes

    - name: Modify time zone & clock
      shell: |
        cp -f /usr/share/zoneinfo/Asia/Shanghai /etc/localtime
        clock -w
        hwclock -w

    - name: Check time now
      command: date
```
Disable the iptables, firewalld, and NetworkManager services

```yaml
- name: step06_net_service
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Stop some services for net
      systemd:
        name: "{{ item }}"
        state: stopped
        enabled: no
      loop:
        - firewalld
        - iptables
        - NetworkManager
```
Run:

```
$ ansible-playbook -v k8s-net-service.yaml
... ...
failed: [master] (item=iptables) => {
    "ansible_loop_var": "item",
    "changed": false,
    "item": "iptables"
}

MSG:

Could not find the requested service iptables: host

PLAY RECAP *****************************************************
master : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
node1  : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
node2  : ok=0 changed=0 unreachable=0 failed=1 skipped=0 rescued=0 ignored=0
```

The failure above occurs because these hosts have no iptables service unit installed; if that is the case in your environment, drop `iptables` from the loop or add `ignore_errors: yes` to the task.
Disable SELinux and swap: k8s-SE-swap-disable.yaml

```yaml
- name: step07_SE_swap_disable
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: SElinux disabled
      lineinfile:
        path: /etc/selinux/config
        line: SELINUX=disabled
        regexp: ^SELINUX=
        state: present
        backup: yes

    - name: Swap disabled
      lineinfile:
        path: /etc/fstab
        line: '#\1'
        regexp: '(^/dev/mapper/centos-swap.*$)'
        backrefs: yes
```
Tune kernel parameters: k8s-kernel-cfg.yaml

```yaml
- name: step08_kernel_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Create /etc/sysctl.d/kubernetes.conf
      copy:
        content: ''
        dest: /etc/sysctl.d/kubernetes.conf
        force: yes

    - name: Cfg bridge and ip_forward
      lineinfile:
        path: /etc/sysctl.d/kubernetes.conf
        line: "{{ item }}"
        state: present
      loop:
        - 'net.bridge.bridge-nf-call-ip6tables = 1'
        - 'net.bridge.bridge-nf-call-iptables = 1'
        - 'net.ipv4.ip_forward = 1'

    - name: Load cfg
      shell:
        cmd: |
          sysctl -p
          modprobe br_netfilter
        removes: /etc/sysctl.d/kubernetes.conf

    - name: Check cfg
      shell:
        cmd: '[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3'
```
Run:

```
$ ansible-playbook -v k8s-kernel-cfg.yaml --step
... ...
TASK [Check cfg] ***********************************************
changed: [master] => {
    "changed": true,
    "cmd": "[ $(lsmod | grep br_netfilter | wc -l) -ge 2 ] && exit 0 || exit 3",
    "delta": "0:00:00.011574",
    "end": "2022-02-27 04:26:01.332896",
    "rc": 0,
    "start": "2022-02-27 04:26:01.321322"
}
changed: [node2] => {
    "delta": "0:00:00.016331",
    "end": "2022-02-27 04:26:01.351208",
    "start": "2022-02-27 04:26:01.334877"
}
changed: [node1] => {
    "delta": "0:00:00.016923",
    "end": "2022-02-27 04:26:01.355983",
    "start": "2022-02-27 04:26:01.339060"
}

PLAY RECAP *****************************************************
master : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node1  : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
node2  : ok=4 changed=4 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
```
Configure ipvs: k8s-ipvs-cfg.yaml

```yaml
- name: step09_ipvs_cfg
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Install ipset and ipvsadm
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - ipset
        - ipvsadm

    - name: Load modules
      shell: |
        modprobe -- ip_vs
        modprobe -- ip_vs_rr
        modprobe -- ip_vs_wrr
        modprobe -- ip_vs_sh
        modprobe -- nf_conntrack_ipv4

    - name: Check cfg
      shell:
        cmd: '[ $(lsmod | grep -e ip_vs -e nf_conntrack_ipv4 | wc -l) -ge 2 ] && exit 0 || exit 3'
```

(The check originally grepped for `-ip_vs`, which can never match a module name; it is corrected to `ip_vs` here.)
Install docker: k8s-docker-install.yaml

```yaml
- name: step10_docker_install
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Install docker-ce
      yum:
        name: docker-ce-18.06.3.ce-3.el7
        state: present

    - name: Cfg docker
      copy:
        src: ./files_docker/daemon.json
        dest: /etc/docker/

    - name: Start docker
      systemd:
        name: docker.service
        state: started
        enabled: yes

    - name: Check docker version
      shell:
        cmd: docker --version
```
Install the k8s components (kubeadm/kubelet/kubectl): k8s-install-kubepkgs.yaml

```yaml
- name: step11_k8s_install_kubepkgs
  hosts: k8s_cluster
  gather_facts: no
  tasks:
    - name: Install k8s components
      yum:
        name: "{{ item }}"
        state: present
      loop:
        - kubeadm-1.17.4-0
        - kubelet-1.17.4-0
        - kubectl-1.17.4-0

    - name: Cfg k8s
      copy:
        src: ./files_k8s/kubelet
        dest: /etc/sysconfig/
        force: no
        backup: yes

    - name: Start kubelet
      systemd:
        name: kubelet.service
        state: started
        enabled: yes
```
Pull the cluster images: k8s-apps-images.yaml

```yaml
- name: step12_apps_images
  hosts: k8s_cluster
  gather_facts: no
  vars:
    apps:
      - kube-apiserver:v1.17.4
      - kube-controller-manager:v1.17.4
      - kube-scheduler:v1.17.4
      - kube-proxy:v1.17.4
      - pause:3.1
      - etcd:3.4.3-0
      - coredns:1.6.5
  vars_prompt:
    - name: cfg_python
      prompt: Do you need to install docker pkg for python(Yes/No)?
      default: "no"
      private: no
  tasks:
    - block:
        - name: Install python-pip
          yum:
            name: python-pip
            state: present

        - name: Install docker pkg for python
          shell:
            cmd: |
              pip install docker==4.4.4
              pip install websocket-client==0.32.0
            creates: /usr/lib/python2.7/site-packages/docker/
      when: cfg_python | bool

    - name: Pull images
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        source: pull
      loop: "{{ apps }}"

    - name: Tag images
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        repository: "k8s.gcr.io/{{ item }}"
        force_tag: yes
        source: local
      loop: "{{ apps }}"

    - name: Remove images for ali
      community.docker.docker_image:
        name: "registry.cn-hangzhou.aliyuncs.com/google_containers/{{ item }}"
        state: absent
      loop: "{{ apps }}"
```

(The `Tag images` and `Remove images for ali` tasks lost their module and loop lines in the original page; they are reconstructed here to operate on the same mirror image names, as the run output below shows them looping over `apps`.)
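The pull/tag/remove sequence is just a rename between the Aliyun mirror and the `k8s.gcr.io` names that kubeadm expects. A small Python sketch of that mapping (the trimmed `APPS` list is illustrative):

```python
APPS = ["kube-apiserver:v1.17.4", "pause:3.1", "coredns:1.6.5"]
MIRROR = "registry.cn-hangzhou.aliyuncs.com/google_containers"

def retag_plan(apps):
    """For each image the playbook pulls <mirror>/<app>, tags it as
    k8s.gcr.io/<app>, then deletes the mirror-named copy."""
    return [(f"{MIRROR}/{app}", f"k8s.gcr.io/{app}") for app in apps]

for src, dst in retag_plan(APPS):
    print(f"{src} -> {dst}")
```

This indirection exists because `k8s.gcr.io` is often unreachable from mainland China, while the Aliyun mirror hosts identical images.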
Run:

```
$ ansible-playbook k8s-apps-images.yaml
Do you need to install docker pkg for python(Yes/No)? [no]:

PLAY [step12_apps_images] **************************************

TASK [Install python-pip] **************************************
skipping: [node1]
skipping: [master]
skipping: [node2]

TASK [Install docker pkg for python] ***************************

TASK [Pull images] *********************************************
changed: [node1] => (item=kube-apiserver:v1.17.4)
changed: [node2] => (item=kube-apiserver:v1.17.4)
changed: [master] => (item=kube-apiserver:v1.17.4)
changed: [node1] => (item=kube-controller-manager:v1.17.4)
changed: [master] => (item=kube-controller-manager:v1.17.4)
changed: [node1] => (item=kube-scheduler:v1.17.4)
changed: [master] => (item=kube-scheduler:v1.17.4)
changed: [node1] => (item=kube-proxy:v1.17.4)
changed: [node2] => (item=kube-controller-manager:v1.17.4)
changed: [master] => (item=kube-proxy:v1.17.4)
changed: [node1] => (item=pause:3.1)
changed: [master] => (item=pause:3.1)
changed: [node2] => (item=kube-scheduler:v1.17.4)
changed: [node1] => (item=etcd:3.4.3-0)
changed: [master] => (item=etcd:3.4.3-0)
changed: [node2] => (item=kube-proxy:v1.17.4)
changed: [node1] => (item=coredns:1.6.5)
changed: [master] => (item=coredns:1.6.5)
changed: [node2] => (item=pause:3.1)
changed: [node2] => (item=etcd:3.4.3-0)
changed: [node2] => (item=coredns:1.6.5)

TASK [Tag images] **********************************************
ok: [node1] => (item=kube-apiserver:v1.17.4)
ok: [master] => (item=kube-apiserver:v1.17.4)
ok: [node2] => (item=kube-apiserver:v1.17.4)
ok: [node1] => (item=kube-controller-manager:v1.17.4)
ok: [master] => (item=kube-controller-manager:v1.17.4)
ok: [node2] => (item=kube-controller-manager:v1.17.4)
ok: [master] => (item=kube-scheduler:v1.17.4)
ok: [node1] => (item=kube-scheduler:v1.17.4)
ok: [node2] => (item=kube-scheduler:v1.17.4)
ok: [master] => (item=kube-proxy:v1.17.4)
ok: [node1] => (item=kube-proxy:v1.17.4)
ok: [node2] => (item=kube-proxy:v1.17.4)
ok: [master] => (item=pause:3.1)
ok: [node1] => (item=pause:3.1)
ok: [node2] => (item=pause:3.1)
ok: [master] => (item=etcd:3.4.3-0)
ok: [node1] => (item=etcd:3.4.3-0)
ok: [node2] => (item=etcd:3.4.3-0)
ok: [master] => (item=coredns:1.6.5)
ok: [node1] => (item=coredns:1.6.5)
ok: [node2] => (item=coredns:1.6.5)

TASK [Remove images for ali] ***********************************

PLAY RECAP *****************************************************
master : ok=3 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
node1  : ok=3 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
node2  : ok=3 changed=2 unreachable=0 failed=0 skipped=2 rescued=0 ignored=0
```
Initialize the k8s cluster: k8s-cluster-init.yaml

```yaml
- name: step13_cluster_init
  hosts: master
  gather_facts: no
  tasks:
    - block:
        - name: Kubeadm init
          shell:
            cmd: >
              kubeadm init
              --apiserver-advertise-address={{ ansible_host }}
              --kubernetes-version=v1.17.4
              --service-cidr=10.96.0.0/12
              --pod-network-cidr=10.244.0.0/16
              --image-repository registry.aliyuncs.com/google_containers

        - name: Create /root/.kube
          file:
            path: /root/.kube/
            state: directory
            owner: root
            group: root

        - name: Copy /root/.kube/config
          copy:
            src: /etc/kubernetes/admin.conf
            dest: /root/.kube/config
            remote_src: yes
            backup: yes

        - name: Copy kube-flannel
          copy:
            src: ./files_k8s/kube-flannel.yml
            dest: /root/

        - name: Apply kube-flannel
          shell:
            cmd: kubectl apply -f /root/kube-flannel.yml

        - name: Get token
          shell:
            cmd: kubeadm token create --print-join-command
          register: join_token

        - name: debug join_token
          debug:
            var: join_token.stdout
```
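`join_token.stdout` holds a standard `kubeadm join` command that the worker nodes must run. If you want to automate the join step, you could parse the endpoint, token, and CA hash out of that line; the sample command below is a made-up illustration of the format, not real cluster output:

```python
import re

SAMPLE = ("kubeadm join 192.168.175.140:6443 --token abcdef.0123456789abcdef "
          "--discovery-token-ca-cert-hash sha256:1234abcd")

def parse_join(cmd: str) -> dict:
    """Extract the API endpoint, bootstrap token, and CA cert hash
    from a 'kubeadm token create --print-join-command' line."""
    return {
        "endpoint": re.search(r"join\s+(\S+)", cmd).group(1),
        "token": re.search(r"--token\s+(\S+)", cmd).group(1),
        "ca_hash": re.search(r"--discovery-token-ca-cert-hash\s+(\S+)", cmd).group(1),
    }

print(parse_join(SAMPLE))
```

In practice you can also skip parsing entirely and run `join_token.stdout` verbatim on the node hosts with a second play.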
This concludes the article on deploying a K8s cluster with Ansible; for more on the topic, search 脚本之家 (jb51) for related articles. Thank you for your support.