[csi] A Brief Look at the ceph-csi Component
Description
ceph-csi extends volume-management capability to external storage backends, wiring the third-party storage Ceph into the Kubernetes storage system. It calls Ceph's interfaces or commands to provide the concrete implementations of creating/deleting and mounting/unmounting Ceph volumes. The create/delete and mount/unmount operations in the components analyzed earlier all end up calling ceph-csi, which in turn invokes Ceph's commands or interfaces to perform the final operation.
ceph-csi Service Composition
ceph-csi contains three service types: rbdType, cephfsType, and livenessType. rbdType performs rbd operations to interact with Ceph; cephfsType performs cephfs operations to interact with Ceph; livenessType periodically probes the liveness of the CSI component at its CSI endpoint (sending a probe request to the specified socket address) and records the result as a Prometheus metric. The rbd and cephfs types each expose the following servers:
- ControllerServer: mainly responsible for creating and deleting cephfs/rbd storage.
- NodeServer: deployed on every node in the k8s cluster; handles node-side cephfs/rbd operations, such as mounting storage onto the node and unmounting it.
- IdentityServer: returns information about the service itself, such as its identity (name, version, etc.) and its capabilities, and exposes a liveness-probe endpoint so other components/services can check whether this service is alive.
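The livenessType service described above amounts to a small poller. Below is a minimal sketch, assuming a plain Unix-socket connectivity check stands in for the real gRPC Probe call that the ceph-csi liveness sidecar actually sends; the function name is illustrative, not ceph-csi's.

```python
import socket

def probe_csi_socket(socket_path: str, timeout: float = 1.0) -> int:
    """Return 1 if the CSI endpoint accepts a connection, else 0.

    Simplified stand-in for the liveness sidecar's periodic Probe:
    the real component issues a CSI Identity.Probe gRPC request and
    publishes the result as a Prometheus gauge; here we only test
    that the driver's unix socket is accepting connections.
    """
    s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect(socket_path)
        return 1
    except OSError:
        return 0
    finally:
        s.close()
```

A scrape loop would call this on a timer and export the 0/1 result as the liveness gauge.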
- Deployment steps
For deployment, see my earlier article 《k8s基于csi使用rbd存储》 (using rbd storage in k8s via CSI).
- Deployed components [taking rbd csi as the example]
Containers deployed by csi-rbdplugin-provisioner.yaml: csi-provisioner, csi-snapshotter, csi-attacher, csi-resizer, csi-rbdplugin, and liveness-prometheus. Their roles are as follows.
- csi-provisioner: actually the external-provisioner component. On create pvc, csi-provisioner takes part in creating the storage and the PV object: after observing the PVC-creation event, it assembles the request and calls the ceph-csi component's (i.e. the csi-rbdplugin container's) CreateVolume method to create the storage, then creates the PV object once the storage exists. On delete pvc, it takes part in deleting the storage and the PV object: when the PVC is deleted, the PV controller moves its bound PV from Bound to Released; csi-provisioner observes the PV-update event, calls the ceph-csi component's (csi-rbdplugin container's) DeleteVolume method to delete the storage, and then deletes the PV object.
- csi-snapshotter: actually the external-snapshotter component; handles storage-snapshot operations.
- csi-attacher: actually the external-attacher component; it only manipulates VolumeAttachment objects and does not touch the storage itself.
- csi-resizer: actually the external-resizer component; handles storage-expansion operations.
- csi-rbdplugin: actually the ceph-csi component, running the rbdType ControllerServer/IdentityServer services. On create pvc, the external-provisioner component (the csi-provisioner container) observes the PVC-creation event, assembles the request, and calls this container's CreateVolume method to create the storage; on delete pvc, the PV moves from Bound to Released, the external-provisioner observes the PV-update event, assembles the request, and calls this container's DeleteVolume method to delete the storage.
- liveness-prometheus: actually the ceph-csi component, running the livenessType service; it probes and reports the liveness of the csi-rbdplugin service.
Containers deployed by the node DaemonSet manifest: driver-registrar, csi-rbdplugin, and liveness-prometheus — three containers, whose roles are as follows.
- driver-registrar: registers the csi-rbdplugin container's service with kubelet, passing it the socket address of that service, version information, and the driver name (e.g. rbd.csi.ceph.com).
- csi-rbdplugin: actually the ceph-csi component, running the rbdType NodeServer/IdentityServer services. When a pod claiming a PVC is created, kubelet calls the csi-rbdplugin container to map the already-created storage from the Ceph cluster onto the pod's node and then mount it into the pod's directory; when such a pod is deleted, kubelet calls the container's corresponding methods to unmount the storage from the pod directory and then from the node.
- liveness-prometheus: actually the ceph-csi component, running the livenessType service; it probes and reports the liveness of the csi-rbdplugin service.
The pods deployed by the manifests above, and a sample PVC/PV, look like this:
$ kubectl get pod
NAME                                         READY   STATUS    RESTARTS   AGE
csi-rbdplugin-9tfnm                          3/3     Running   0          26h
csi-rbdplugin-provisioner-5cc9f558c7-d2stz   7/7     Running   0          26h

$ kubectl get pvc,pv
NAME                                  STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/raw-block-pvc   Bound    pvc-4e52c163-a593-4cc1-af59-23367d1e7573   2Gi        RWO            csi-rbd-sc     9m47s

NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                   STORAGECLASS   REASON   AGE
persistentvolume/pvc-4e52c163-a593-4cc1-af59-23367d1e7573   2Gi        RWO            Delete           Bound    default/raw-block-pvc   csi-rbd-sc              9m47s
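The raw-block-pvc.yaml applied below is not shown in the article; a hypothetical manifest matching the objects above (2Gi, RWO, StorageClass csi-rbd-sc) would look roughly like this — `volumeMode: Block` is what makes the claim a "raw block" one:

```yaml
# Hedged sketch of raw-block-pvc.yaml (not the author's original file)
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: raw-block-pvc
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Block
  resources:
    requests:
      storage: 2Gi
  storageClassName: csi-rbd-sc
```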
# Create operation: logs for creating the PVC
$ kubectl apply -f raw-block-pvc.yaml
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-provisioner
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-rbdplugin
# csi-provisioner log: received the PVC-creation event and issued the create command
I0310 09:21:35.323991 1 connection.go:183] GRPC call: /csi.v1.Controller/CreateVolume
I0310 09:21:35.390997 1 controller.go:777] create volume rep: {CapacityBytes:2147483648 VolumeId:0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-7b50e081-a053-11ec-b2dd-fa163ed7971b VolumeContext:map[clusterID:4a9e463a-4853-4237-a5c5-9ae9d25bacda csi.storage.k8s.io/pv/name:pvc-4e52c163-a593-4cc1-af59-23367d1e7573 csi.storage.k8s.io/pvc/name:raw-block-pvc csi.storage.k8s.io/pvc/namespace:default imageFeatures:layering imageName:csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b journalPool:kubernetes pool:kubernetes] ContentSource: AccessibleTopology:[] XXX_NoUnkeyedLiteral:{} XXX_unrecognized:[] XXX_sizecache:0}
I0310 09:21:35.391046 1 controller.go:861] successfully created PV pvc-4e52c163-a593-4cc1-af59-23367d1e7573 for PVC raw-block-pvc and csi volume name 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-7b50e081-a053-11ec-b2dd-fa163ed7971b
# csi-rbdplugin log: received the create-storage command; called rbd and created the volume successfully
I0310 09:21:35.350755 1 rbd_journal.go:482] ID: 1609 Req-ID: pvc-4e52c163-a593-4cc1-af59-23367d1e7573 generated Volume ID (0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-7b50e081-a053-11ec-b2dd-fa163ed7971b) and image name (csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b) for request name (pvc-4e52c163-a593-4cc1-af59-23367d1e7573)
I0310 09:21:35.350822 1 rbd_util.go:352] ID: 1609 Req-ID: pvc-4e52c163-a593-4cc1-af59-23367d1e7573 rbd: create kubernetes/csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b size 2048M (features: [layering]) using mon 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789
I0310 09:21:35.374165 1 controllerserver.go:666] ID: 1609 Req-ID: pvc-4e52c163-a593-4cc1-af59-23367d1e7573 created image kubernetes/csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b backed for request name pvc-4e52c163-a593-4cc1-af59-23367d1e7573
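The long volume handle in these logs is structured. Based on ceph-csi's documented CSI ID encoding, it appears to be: a 4-hex-digit version, a 4-hex-digit cluster-ID length, the cluster ID, a 16-hex-digit pool ID, and the image UUID, joined by dashes. A hedged positional parser (the cluster ID itself contains dashes, so splitting on "-" would not work):

```python
def parse_volume_handle(vid: str) -> dict:
    """Decode a ceph-csi volume handle like
    0001-0024-<cluster-id>-<pool-id>-<image-uuid>.
    Field layout inferred from ceph-csi's CSI ID format; treat as a sketch.
    """
    version = vid[0:4]                  # encoding version, e.g. "0001"
    cluster_len = int(vid[5:9], 16)     # "0024" hex -> 36-char cluster ID
    cluster_id = vid[10:10 + cluster_len]
    rest = vid[10 + cluster_len + 1:]   # skip the separating dash
    pool_id = int(rest[0:16], 16)       # 64-bit RADOS pool ID, hex encoded
    image_uuid = rest[17:]              # UUID suffix of the rbd image name
    return {"version": version, "cluster_id": cluster_id,
            "pool_id": pool_id, "image_uuid": image_uuid}
```

For the handle above this yields pool_id 5 and image_uuid 7b50e081-a053-11ec-b2dd-fa163ed7971b, which matches the image name csi-vol-7b50e081-a053-11ec-b2dd-fa163ed7971b in the csi-rbdplugin log.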
# Delete operation: logs for deleting the PVC
$ kubectl delete -f raw-block-pvc.yaml
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-provisioner
$ kubectl logs -f --tail=100 csi-rbdplugin-provisioner-5cc9f558c7-d2stz -c csi-rbdplugin
# csi-provisioner log: received the PVC-deletion request and issued the command to delete the PV
I0310 09:11:52.301723 1 controller.go:1413] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": started
I0310 09:11:52.306652 1 connection.go:183] GRPC call: /csi.v1.Controller/DeleteVolume
I0310 09:11:52.306671 1 connection.go:184] GRPC request: {"secrets":"***stripped***","volume_id":"0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b"}
I0310 09:11:53.088151 1 connection.go:186] GRPC response: {}
I0310 09:11:53.088204 1 connection.go:187] GRPC error:
I0310 09:11:53.088220 1 controller.go:1428] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": volume deleted
I0310 09:11:53.098260 1 controller.go:1478] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": persistentvolume deleted
I0310 09:11:53.098290 1 controller.go:1483] delete "pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee": succeeded
I0310 09:11:54.915543 1 leaderelection.go:278] successfully renewed lease default/rbd-csi-ceph-com
# csi-rbdplugin log: received the delete-storage command; called rbd and deleted the volume successfully
I0310 09:11:52.390569 1 rbd_util.go:644] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: delete csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b-temp using mon 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789, pool kubernetes
I0310 09:11:52.394786 1 controllerserver.go:947] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b deleting image csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b
I0310 09:11:52.394815 1 rbd_util.go:644] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: delete csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b using mon 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789, pool kubernetes
ask to remove image "kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b" with id "11cbf1b83337" from trash
I0310 09:11:53.087702 1 omap.go:123] ID: 1598 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b removed omap keys (pool="kubernetes", namespace="", name="csi.volumes.default"): [csi.volume.pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee]
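The "removed omap keys" line belongs to ceph-csi's volume journal: a mapping from the PVC-derived request name to the generated image UUID, kept as RADOS omap keys so that a retried CreateVolume returns the existing image instead of provisioning a duplicate. A simplified in-memory sketch of that idempotency logic (the real journal lives on the csi.volumes.default omap object; class and method names here are illustrative):

```python
import uuid

class VolumeJournal:
    """In-memory stand-in for ceph-csi's RADOS omap volume journal."""

    def __init__(self):
        self._by_request = {}  # request name (pvc-...) -> image uuid

    def reserve(self, request_name: str) -> str:
        """Return the image name for request_name, reusing any prior
        reservation so a retried CreateVolume is idempotent."""
        if request_name not in self._by_request:
            self._by_request[request_name] = str(uuid.uuid4())
        return "csi-vol-" + self._by_request[request_name]

    def release(self, request_name: str) -> None:
        """Drop the mapping on DeleteVolume — the analogue of the
        'removed omap keys ... csi.volume.<pvc-name>' log line above."""
        self._by_request.pop(request_name, None)
```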
How CSI performs create and delete of a pod's rbd storage
# Create operation: logs while mounting the rbd storage into a pod [viewed on the node]
$ kubectl create -f raw-block-pod.yaml
$ kubectl logs -f --tail=10 csi-rbdplugin-9tfnm -c csi-rbdplugin
# csi creates the block device via rbd and maps it to /dev/rbd0 on the host
I0310 08:31:22.769929 11099 cephcmds.go:63] ID: 1900 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b command succeeded: rbd [--id kubernetes -m 172.20.163.52:6789,172.20.163.52:6789,172.20.163.52:6789 --keyfile=***stripped*** map kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b --device-type krbd --options noudev]
I0310 08:31:22.769972 11099 nodeserver.go:391] ID: 1900 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd image: kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b was successfully mapped at /dev/rbd0
# format the /dev/rbd0 block device as ext4 and mount it for the pod
I0310 08:31:22.823277 11099 mount_linux.go:376] Checking for issues with fsck on disk: /dev/rbd0
I0310 08:31:22.894904 11099 mount_linux.go:477] Attempting to mount disk /dev/rbd0 in ext4 format at /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/globalmount/0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b
I0310 08:31:22.894960 11099 mount_linux.go:183] Mounting cmd (mount) with arguments (-t ext4 -o _netdev,discard,defaults /dev/rbd0 /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/globalmount/0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b)
I0310 08:31:22.997909 11099 resizefs_linux.go:124] ResizeFs.needResize - checking mounted volume /dev/rbd0
I0310 08:31:23.000412 11099 resizefs_linux.go:128] Ext size: filesystem size=2147483648, block size=4096
I0310 08:31:23.000433 11099 resizefs_linux.go:140] Volume /dev/rbd0: device size=2147483648, filesystem size=2147483648, block size=4096
I0310 08:31:23.000502 11099 nodeserver.go:351] ID: 1900 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: successfully mounted volume 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b to stagingTargetPath /var/lib/kubelet/plugins/kubernetes.io/csi/pv/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/globalmount/0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b
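The two mount points in these logs reflect CSI's two-step node flow: NodeStageVolume mounts the device once per node at a "globalmount" staging path, and NodePublishVolume then bind-mounts that into each pod's volume directory. A hedged helper reproducing the kubelet path layout visible in the logs (the prefixes are taken from the output above, not from an API):

```python
import os

KUBELET_ROOT = "/var/lib/kubelet"  # assumed default kubelet root

def staging_target_path(pv_name: str, volume_handle: str) -> str:
    """Per-node mount point used by NodeStageVolume (one per volume per node)."""
    return os.path.join(KUBELET_ROOT, "plugins/kubernetes.io/csi/pv",
                        pv_name, "globalmount", volume_handle)

def publish_target_path(pod_uid: str, pv_name: str) -> str:
    """Per-pod bind-mount point used by NodePublishVolume."""
    return os.path.join(KUBELET_ROOT, "pods", pod_uid,
                        "volumes/kubernetes.io~csi", pv_name, "mount")
```

Deleting the pod (next section) undoes these steps in reverse: unmount the publish path, unmount the staging path, then unmap the rbd device.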
# Delete operation: logs while deleting a pod that has the rbd storage mounted [viewed on the node]
$ kubectl delete pod pod-with-raw-block-volume
$ kubectl logs -f --tail=10 csi-rbdplugin-9tfnm -c csi-rbdplugin
I0310 07:57:38.477003 11099 mount_linux.go:294] Unmounting /var/lib/kubelet/pods/b81968d7-1f46-4076-8f90-36c2b1e2ea86/volumes/kubernetes.io~csi/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/mount
# the volume is unmounted from the pod directory
I0310 07:57:38.485433 11099 nodeserver.go:864] ID: 1862 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b rbd: successfully unbound volume 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b from /var/lib/kubelet/pods/b81968d7-1f46-4076-8f90-36c2b1e2ea86/volumes/kubernetes.io~csi/pvc-a92ceaaf-e400-4a74-a578-1fa0292ebdee/mount
# the block device is unmapped (rbd unmap) from the host
I0310 07:57:38.777236 11099 cephcmds.go:63] ID: 1864 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b command succeeded: rbd [unmap kubernetes/csi-vol-b8f108b8-a022-11ec-b2dd-fa163ed7971b --device-type krbd --options noudev]
I0310 07:57:38.777270 11099 nodeserver.go:977] ID: 1864 Req-ID: 0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b successfully unmapped volume (0001-0024-4a9e463a-4853-4237-a5c5-9ae9d25bacda-0000000000000005-b8f108b8-a022-11ec-b2dd-fa163ed7971b)