Kubernetes in Practice: GlusterFS

System environment

  • CentOS Linux release 7.5.1804 (Core)
  • Docker version 1.13.1, build 8633870/1.13.1
  • Kubernetes v1.10.0
Server      IP             Role
master-192  172.30.81.192  k8s-master, gluster-node
node-193    172.30.81.193  k8s-node, gluster-node
node-194    172.30.81.194  k8s-node, gluster-client
Deploying GlusterFS

Gluster cluster deployment: install the packages
yum install centos-release-gluster -y
yum install -y glusterfs glusterfs-server glusterfs-fuse glusterfs-rdma glusterfs-geo-replication glusterfs-devel
Copy the /etc/hosts file to every host:
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
172.30.81.192 master-192
172.30.81.193 node-193
172.30.81.194 node-194
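
Before probing peers, it is worth confirming that every hostname resolves on every machine; a quick check (plain shell, using the hostnames from the table above):

# Each host should answer to its short name
for h in master-192 node-193 node-194; do ping -c 1 $h; done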

Start glusterd and join the nodes into a cluster
systemctl enable glusterd
systemctl start glusterd
gluster peer probe node-193
[root@master-192 glusterfs]# gluster peer status
Number of Peers: 1

Hostname: node-193
Uuid: c9114119-3601-4b20-ba42-7272e4bf72f5
State: Peer in Cluster (Connected)
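
Peering can also be verified from the other side; gluster pool list (standard gluster CLI) shows every node in the trusted pool and its connection state:

# Run on node-193: both nodes should show State "Connected"
gluster pool list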

Create the replicated k8s-volume (two replicas, one brick on each gluster node)
gluster volume create k8s-volume replica 2 master-192:/data/ node-193:/data1/
gluster volume start k8s-volume
[root@master-192 glusterfs]# gluster volume info

Volume Name: k8s-volume
Type: Replicate
Volume ID: e61f74c7-9f69-40b5-9211-fc1446493009
Status: Started
Snapshot Count: 0
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: master-192:/data
Brick2: node-193:/data1
Options Reconfigured:
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
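
Before mounting anything, confirm that both brick processes are actually online; gluster volume status reports the port and PID of each brick:

# Both bricks should show Online "Y"
gluster volume status k8s-volume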

GlusterFS client mount test

Install the client packages on node-194:
yum install centos-release-gluster -y
yum install -y glusterfs-fuse glusterfs
[root@node-194 /]# mount -t glusterfs 172.30.81.192:k8s-volume /mnt
[root@node-194 /]# ls /mnt/
index.html  lost+found
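
This mount does not survive a reboot. A minimal /etc/fstab sketch, assuming the same /mnt mount point; backup-volfile-servers (a standard glusterfs mount option) lets the client fetch the volume layout from node-193 if master-192 is unreachable at mount time:

# _netdev delays the mount until networking is up
172.30.81.192:/k8s-volume /mnt glusterfs defaults,_netdev,backup-volfile-servers=172.30.81.193 0 0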

Using GlusterFS from Kubernetes

1. Create glusterfs-endpoints.json

The Endpoints object tells Kubernetes where the gluster servers live. The port field is required by the Endpoints schema but is effectively ignored by the GlusterFS volume plugin; any legal value works.
{ "kind": "Endpoints", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "subsets": [ { "addresses": [ { "ip": "172.30.81.192" } ], "ports": [ { "port": 1 } ] }, { "addresses": [ { "ip": "172.30.81.193" } ], "ports": [ { "port": 1 } ] } ] }

kubectl create -f glusterfs-endpoints.json
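
A quick check that the object was created with both server addresses:

# Should list 172.30.81.192:1 and 172.30.81.193:1 under ENDPOINTS
kubectl get endpoints glusterfs-cluster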
2. Create glusterfs-service.json

A Service with the same name is needed so that the hand-written Endpoints persist:
{ "kind": "Service", "apiVersion": "v1", "metadata": { "name": "glusterfs-cluster" }, "spec": { "ports": [ {"port": 1} ] } }

kubectl create -f glusterfs-service.json
3. Create the PV and PVC

The PersistentVolume references the gluster volume through the Endpoints above; the PersistentVolumeClaim then binds to it by matching access mode and requested capacity.
glusterfs-pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    endpoints: "glusterfs-cluster"
    path: "k8s-volume"
    readOnly: false

glusterfs-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi

kubectl create -f glusterfs-pv.yaml
kubectl create -f glusterfs-pvc.yaml
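
The claim requests 2Gi with ReadWriteMany, which the 10Gi PV satisfies; both should report STATUS Bound before any pod uses them:

kubectl get pv pv
kubectl get pvc pvc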
4. Create a pod that uses the PVC

The Deployment below mounts the claim into nginx's web root, pins the pod to node-194 with a nodeSelector, and exposes it through a NodePort Service.
test.yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: test1
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: test1
    spec:
      containers:
        - name: test1
          image: nginx
          volumeMounts:
            - name: gs
              mountPath: /usr/share/nginx/html
      volumes:
        - name: gs
          persistentVolumeClaim:
            claimName: pvc
      nodeSelector:
        kubernetes.io/hostname: node-194
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: test1
  name: test1
  namespace: default
spec:
  selector:
    app: test1
  ports:
    - port: 80
  type: NodePort

kubectl create -f test.yaml
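
An end-to-end check, assuming the index.html written during the client mount test is still on the volume; the NodePort is allocated dynamically, so read it from the Service first:

# Note the port mapped to 80 in the output, e.g. 80:3XXXX/TCP
kubectl get svc test1
# Fetch the page through any node's IP using that port (placeholder below)
curl http://172.30.81.194:<nodePort>/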
