ROOK 07: Ceph CSI

Rook integrates three CSI drivers, each targeting a different scenario (a reference StorageClass sketch follows this list):

  • RBD: a block storage driver optimized for RWO Pod access
  • CephFS: a file storage driver that allows one or more Pods to access the same storage with RWX semantics
  • NFS (experimental): a file storage driver that creates NFS exports, which can be mounted into Pods or accessed directly by NFS clients from outside the Kubernetes cluster
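
Each driver is consumed through a StorageClass whose provisioner field names the driver. As a point of reference, a minimal sketch of the rook-ceph-block class used later in this section, assuming a CephBlockPool named replicapool in the rook-ceph namespace:

---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-ceph-block
provisioner: rook-ceph.rbd.csi.ceph.com
parameters:
  clusterID: rook-ceph
  pool: replicapool # assumption: a CephBlockPool with this name exists
  imageFormat: "2"
  imageFeatures: layering
  csi.storage.k8s.io/provisioner-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/provisioner-secret-namespace: rook-ceph
  csi.storage.k8s.io/node-stage-secret-name: rook-csi-rbd-node
  csi.storage.k8s.io/node-stage-secret-namespace: rook-ceph
  csi.storage.k8s.io/fstype: ext4
reclaimPolicy: Delete
allowVolumeExpansion: true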

Snapshots

Prerequisites for using snapshots:

  • Rook officially supports v1 snapshots on Kubernetes 1.20+
  • The snapshot controller and the snapshot v1 CRDs must be installed
  • A VolumeSnapshotClass is required for volume snapshots to work

Installing the snapshot controller

Volume snapshot functionality relies on the volume snapshot controller and the volume snapshot CRDs.
Both the controller and the CRDs are independent of any CSI driver.
Regardless of how many CSI drivers are deployed, each cluster must run exactly one instance of the volume snapshot controller and install exactly one set of volume snapshot CRDs.
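
Before installing anything, it is worth checking whether a snapshot controller is already present, since some Kubernetes distributions ship one. A quick, non-destructive check:

kubectl get pods -A | grep snapshot-controller
kubectl get crd | grep snapshot.storage.k8s.io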

First, clone the project locally:

git clone https://github.com/kubernetes-csi/external-snapshotter.git

Install the snapshot CRDs:

[vagrant@master01 external-snapshotter]$ kubectl kustomize client/config/crd | kubectl create -f -
customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
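
The registered snapshot API resources can be confirmed with:

kubectl api-resources --api-group=snapshot.storage.k8s.io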

Install the snapshot controller:

[vagrant@master01 external-snapshotter]$ kubectl -n kube-system kustomize deploy/kubernetes/snapshot-controller | kubectl create -f -
serviceaccount/snapshot-controller created
role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
deployment.apps/snapshot-controller created

Verify:

[vagrant@master01 external-snapshotter]$ kubectl get deployment.apps/snapshot-controller -n kube-system
NAME                  READY   UP-TO-DATE   AVAILABLE   AGE
snapshot-controller   2/2     2            2           55s

Install the CSI snapshotter:

[vagrant@master01 external-snapshotter]$ kubectl kustomize deploy/kubernetes/csi-snapshotter | kubectl create -f -
serviceaccount/csi-provisioner created
serviceaccount/csi-snapshotter created
role.rbac.authorization.k8s.io/external-provisioner-cfg created
role.rbac.authorization.k8s.io/external-snapshotter-leaderelection created
clusterrole.rbac.authorization.k8s.io/external-provisioner-runner created
clusterrole.rbac.authorization.k8s.io/external-snapshotter-runner created
rolebinding.rbac.authorization.k8s.io/csi-provisioner-role-cfg created
rolebinding.rbac.authorization.k8s.io/csi-snapshotter-provisioner-role-cfg created
rolebinding.rbac.authorization.k8s.io/external-snapshotter-leaderelection created
clusterrolebinding.rbac.authorization.k8s.io/csi-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-snapshotter-provisioner-role created
clusterrolebinding.rbac.authorization.k8s.io/csi-snapshotter-role created
service/csi-snapshotter created
statefulset.apps/csi-snapshotter created

Verify:

[vagrant@master01 external-snapshotter]$ kubectl get statefulset.apps/csi-snapshotter
NAME              READY   AGE
csi-snapshotter   1/1     2m13s
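
Note that in a Rook deployment the snapshotter sidecar also runs inside the CSI provisioner pods. One way to confirm, assuming Rook's default app=csi-rbdplugin-provisioner label, is to list the container names and look for csi-snapshotter among them:

kubectl -n rook-ceph get pod -l app=csi-rbdplugin-provisioner \
  -o jsonpath='{.items[*].spec.containers[*].name}'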

RBD Snapshots

To snapshot RBD volumes, first create the corresponding VolumeSnapshotClass resource.

VolumeSnapshotClass

In the VolumeSnapshotClass, the csi.storage.k8s.io/snapshotter-secret-name parameter references the name of the secret created for the rbdplugin, and clusterID must match the Rook cluster namespace.
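
Both provisioner secrets referenced in this section (rook-csi-rbd-provisioner and rook-csi-cephfs-provisioner) are created automatically by the Rook operator; they can be listed with:

kubectl -n rook-ceph get secret | grep csi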

Example snapshotclass.yaml:

---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-rbdplugin-snapclass
driver: rook-ceph.rbd.csi.ceph.com # csi-provisioner-name
parameters:
  # Specify a string that identifies your cluster. Ceph CSI supports any
  # unique string. When Ceph CSI is deployed by Rook use the Rook namespace,
  # for example "rook-ceph".
  clusterID: rook-ceph # namespace:cluster
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph # namespace:cluster
deletionPolicy: Delete

Apply and verify:

[vagrant@master01 rbd]$ kubectl apply -f snapshotclass.yaml
volumesnapshotclass.snapshot.storage.k8s.io/csi-rbdplugin-snapclass created

[vagrant@master01 rbd]$ kubectl get volumesnapshotclass
NAME                      DRIVER                       DELETIONPOLICY   AGE
csi-rbdplugin-snapclass   rook-ceph.rbd.csi.ceph.com   Delete           19s

VolumeSnapshot

In the VolumeSnapshot, volumeSnapshotClassName should be the name of the VolumeSnapshotClass created above.
persistentVolumeClaimName should be the name of a PVC previously created by the RBD CSI driver.

First, create a PVC backed by RBD:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-ceph-block

Create and verify:

[vagrant@master01 rbd]$ kubectl apply -f pvc.yaml -n snapshots
persistentvolumeclaim/rbd-pvc created
[vagrant@master01 rbd]$ kubectl get pvc -n snapshots
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
rbd-pvc   Bound    pvc-795c7e81-25bd-4edd-a911-7b36498ddb0a   1Gi        RWO            rook-ceph-block   <unset>                 5s
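
So that the restore later has something to show, you can optionally write a marker file into rbd-pvc first. A minimal sketch with a hypothetical throwaway pod (apply it with -n snapshots as well):

---
apiVersion: v1
kind: Pod
metadata:
  name: rbd-writer # hypothetical throwaway pod
spec:
  restartPolicy: Never
  containers:
    - name: writer
      image: busybox
      # write a marker file, then flush it to the RBD-backed volume
      command: ["sh", "-c", "echo snapshot-test > /data/marker && sync"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rbd-pvc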

Create the snapshot:

---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: rbd-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: rbd-pvc

Create and verify:

[vagrant@master01 rbd]$ kubectl apply -f snapshot.yaml -n snapshots
volumesnapshot.snapshot.storage.k8s.io/rbd-pvc-snapshot created
[vagrant@master01 rbd]$ kubectl get volumesnapshot -n snapshots
NAME               READYTOUSE   SOURCEPVC   SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS             SNAPSHOTCONTENT                                    CREATIONTIME   AGE
rbd-pvc-snapshot   true         rbd-pvc                             1Gi           csi-rbdplugin-snapclass   snapcontent-1f68225e-ce19-4837-880e-f9bd41710761   6s             8s
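
Each VolumeSnapshot binds to a cluster-scoped VolumeSnapshotContent object that records the driver and the Ceph-side snapshot handle; both can be inspected with:

kubectl get volumesnapshotcontent
kubectl describe volumesnapshot rbd-pvc-snapshot -n snapshots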

Restoring a snapshot

Restoring a snapshot means creating a new PVC from the snapshot.
A reference pvc-restore.yaml:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-restore
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: rbd-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

In dataSource, kind must be set to VolumeSnapshot, and name is the name of the snapshot created above.

Apply and verify:

[vagrant@master01 rbd]$ kubectl apply -f pvc-restore.yaml -n snapshots
persistentvolumeclaim/rbd-pvc-restore created
[vagrant@master01 rbd]$ kubectl get pvc -n snapshots
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
rbd-pvc           Bound    pvc-795c7e81-25bd-4edd-a911-7b36498ddb0a   1Gi        RWO            rook-ceph-block   <unset>                 4m10s
rbd-pvc-restore   Bound    pvc-0b02edf3-e472-4391-a5bd-939eaaba978e   1Gi        RWO            rook-ceph-block   <unset>                 6s
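
If you wrote a marker file before taking the snapshot, a hypothetical checker pod can confirm that the restored volume carries the data:

---
apiVersion: v1
kind: Pod
metadata:
  name: rbd-restore-check # hypothetical throwaway pod
spec:
  restartPolicy: Never
  containers:
    - name: check
      image: busybox
      # print the marker file written before the snapshot was taken
      command: ["sh", "-c", "cat /data/marker"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: rbd-pvc-restore

After applying it with -n snapshots, kubectl logs rbd-restore-check -n snapshots should print the marker contents.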

CephFS Snapshots

To snapshot CephFS volumes, likewise first create the corresponding VolumeSnapshotClass resource.

VolumeSnapshotClass

In the VolumeSnapshotClass, the csi.storage.k8s.io/snapshotter-secret-name parameter references the name of the secret created for the cephfsplugin.

Example snapshotclass.yaml:

---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: csi-cephfsplugin-snapclass
driver: rook-ceph.cephfs.csi.ceph.com # csi-provisioner-name
parameters:
  # Specify a string that identifies your cluster. Ceph CSI supports any
  # unique string. When Ceph CSI is deployed by Rook use the Rook namespace,
  # for example "rook-ceph".
  clusterID: rook-ceph # namespace:cluster
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-cephfs-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph # namespace:cluster
deletionPolicy: Delete

Apply and check:

[vagrant@master01 cephfs]$ kubectl apply -f snapshotclass.yaml 
volumesnapshotclass.snapshot.storage.k8s.io/csi-cephfsplugin-snapclass created
[vagrant@master01 cephfs]$ kubectl get volumesnapshotclass
NAME                         DRIVER                          DELETIONPOLICY   AGE
csi-cephfsplugin-snapclass   rook-ceph.cephfs.csi.ceph.com   Delete           6s
csi-rbdplugin-snapclass      rook-ceph.rbd.csi.ceph.com      Delete           55m

VolumeSnapshot

In the VolumeSnapshot, volumeSnapshotClassName should be the name of the VolumeSnapshotClass created above.
persistentVolumeClaimName should be the name of a PVC previously created by the CephFS CSI driver.

First, create a PVC backed by CephFS:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: rook-cephfs

Apply and check:

[vagrant@master01 cephfs]$ kubectl apply -f pvc.yaml -n snapshots
persistentvolumeclaim/cephfs-pvc created
[vagrant@master01 cephfs]$ kubectl get pvc -n snapshots
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
cephfs-pvc        Bound    pvc-44c32f66-ba12-49e6-862d-42c66b7b0e77   1Gi        RWO            rook-cephfs       <unset>                 8s
rbd-pvc           Bound    pvc-795c7e81-25bd-4edd-a911-7b36498ddb0a   1Gi        RWO            rook-ceph-block   <unset>                 11m
rbd-pvc-restore   Bound    pvc-0b02edf3-e472-4391-a5bd-939eaaba978e   1Gi        RWO            rook-ceph-block   <unset>                 7m32s

Create the snapshot:

---
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: cephfs-pvc-snapshot
spec:
  volumeSnapshotClassName: csi-cephfsplugin-snapclass
  source:
    persistentVolumeClaimName: cephfs-pvc

Apply and check:

[vagrant@master01 cephfs]$ kubectl apply -f snapshot.yaml -n snapshots
volumesnapshot.snapshot.storage.k8s.io/cephfs-pvc-snapshot created

[vagrant@master01 cephfs]$ kubectl get volumesnapshot -n snapshots
NAME                  READYTOUSE   SOURCEPVC    SOURCESNAPSHOTCONTENT   RESTORESIZE   SNAPSHOTCLASS                SNAPSHOTCONTENT                                    CREATIONTIME   AGE
cephfs-pvc-snapshot   true         cephfs-pvc                           1Gi           csi-cephfsplugin-snapclass   snapcontent-02ca1303-003e-4b3a-a370-50d72f97b49b   11s            12s
rbd-pvc-snapshot      true         rbd-pvc                              1Gi           csi-rbdplugin-snapclass      snapcontent-1f68225e-ce19-4837-880e-f9bd41710761   11m            11m
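
If the Rook toolbox is deployed, the snapshot can also be confirmed on the Ceph side. A sketch assuming the default filesystem name myfs and the csi subvolume group; the subvolume name (typically csi-vol-<uuid>) comes from the first command:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs subvolume ls myfs --group_name csi
kubectl -n rook-ceph exec deploy/rook-ceph-tools -- ceph fs subvolume snapshot ls myfs <subvolume-name> --group_name csi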

Restoring a snapshot

Restoring a snapshot means creating a new PVC from the snapshot.
A reference pvc-restore.yaml (note that the restored PVC requests ReadWriteMany, which CephFS supports, even though the source PVC was created with ReadWriteOnce):

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-restore
spec:
  storageClassName: rook-cephfs
  dataSource:
    name: cephfs-pvc-snapshot
    kind: VolumeSnapshot
    apiGroup: snapshot.storage.k8s.io
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Apply and check:

[vagrant@master01 cephfs]$ kubectl apply -f pvc-restore.yaml -n snapshots
persistentvolumeclaim/cephfs-pvc-restore created

[vagrant@master01 cephfs]$ kubectl get pvc -n snapshots
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
cephfs-pvc           Bound    pvc-44c32f66-ba12-49e6-862d-42c66b7b0e77   1Gi        RWO            rook-cephfs       <unset>                 3m11s
cephfs-pvc-restore   Bound    pvc-5b39a7a1-546c-4ec3-bb8d-b91bc00dc397   1Gi        RWX            rook-cephfs       <unset>                 7s
rbd-pvc              Bound    pvc-795c7e81-25bd-4edd-a911-7b36498ddb0a   1Gi        RWO            rook-ceph-block   <unset>                 14m
rbd-pvc-restore      Bound    pvc-0b02edf3-e472-4391-a5bd-939eaaba978e   1Gi        RWO            rook-ceph-block   <unset>                 10m

Volume cloning

The CSI volume cloning feature adds support for specifying an existing PVC in the dataSource field, to indicate that the user wants to clone that volume.

A clone is defined as a duplicate of an existing Kubernetes volume that can be consumed like any standard volume. The only difference is that at provisioning time, instead of creating a "new" empty volume, the backend device creates an exact duplicate of the specified volume.

RBD volume cloning

Prerequisites for RBD volume cloning:

  • Kubernetes v1.16+
  • Ceph-CSI driver v3.0.0+

In pvc-clone, dataSource should be the name of a PVC previously created by the RBD CSI driver. The dataSource kind should be PersistentVolumeClaim, and the storage class should be the same as that of the source PVC.

Example pvc-clone.yaml:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: rbd-pvc-clone
spec:
  storageClassName: rook-ceph-block
  dataSource:
    name: rbd-pvc
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi

Apply and check:

[vagrant@master01 rbd]$ kubectl apply -f pvc-clone.yaml -n snapshots
persistentvolumeclaim/rbd-pvc-clone created
[vagrant@master01 rbd]$ kubectl get pvc -n snapshots
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
cephfs-pvc           Bound    pvc-44c32f66-ba12-49e6-862d-42c66b7b0e77   1Gi        RWO            rook-cephfs       <unset>                 10m
cephfs-pvc-restore   Bound    pvc-5b39a7a1-546c-4ec3-bb8d-b91bc00dc397   1Gi        RWX            rook-cephfs       <unset>                 7m55s
rbd-pvc              Bound    pvc-795c7e81-25bd-4edd-a911-7b36498ddb0a   1Gi        RWO            rook-ceph-block   <unset>                 22m
rbd-pvc-clone        Bound    pvc-fc6d8113-7425-4d04-a08e-a1c5c0e1fced   1Gi        RWO            rook-ceph-block   <unset>                 5s
rbd-pvc-restore      Bound    pvc-0b02edf3-e472-4391-a5bd-939eaaba978e   1Gi        RWO            rook-ceph-block   <unset>                 18m
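
On the Ceph side, the clone appears as an independent RBD image. If the Rook toolbox is deployed and the pool is named replicapool (an assumption; adjust to your CephBlockPool), the images can be listed with:

kubectl -n rook-ceph exec deploy/rook-ceph-tools -- rbd ls replicapool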

CephFS volume cloning

Prerequisites for CephFS volume cloning:

  • Kubernetes v1.16+
  • Ceph-CSI driver v3.1.0+

In pvc-clone, dataSource should be the name of a PVC previously created by the CephFS CSI driver. The dataSource kind should be PersistentVolumeClaim, and the storage class should be the same as that of the source PVC.

Example pvc-clone.yaml:

---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc-clone
spec:
  storageClassName: rook-cephfs
  dataSource:
    name: cephfs-pvc
    kind: PersistentVolumeClaim
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi

Apply and check:

[vagrant@master01 cephfs]$ kubectl apply -f pvc-clone.yaml -n snapshots
persistentvolumeclaim/cephfs-pvc-clone created
[vagrant@master01 cephfs]$ kubectl get pvc -n snapshots
NAME                 STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS      VOLUMEATTRIBUTESCLASS   AGE
cephfs-pvc           Bound    pvc-44c32f66-ba12-49e6-862d-42c66b7b0e77   1Gi        RWO            rook-cephfs       <unset>                 13m
cephfs-pvc-clone     Bound    pvc-3db57ff4-f679-402b-a825-1e6db00196dc   1Gi        RWX            rook-cephfs       <unset>                 5s
cephfs-pvc-restore   Bound    pvc-5b39a7a1-546c-4ec3-bb8d-b91bc00dc397   1Gi        RWX            rook-cephfs       <unset>                 10m
rbd-pvc              Bound    pvc-795c7e81-25bd-4edd-a911-7b36498ddb0a   1Gi        RWO            rook-ceph-block   <unset>                 24m
rbd-pvc-clone        Bound    pvc-fc6d8113-7425-4d04-a08e-a1c5c0e1fced   1Gi        RWO            rook-ceph-block   <unset>                 2m10s
rbd-pvc-restore      Bound    pvc-0b02edf3-e472-4391-a5bd-939eaaba978e   1Gi        RWO            rook-ceph-block   <unset>                 20m
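
Since the VolumeSnapshotClasses above use deletionPolicy: Delete, removing a VolumeSnapshot also removes the corresponding Ceph-side snapshot. A cleanup sketch for everything created in this section (delete the snapshots before their source PVCs):

kubectl delete -n snapshots volumesnapshot rbd-pvc-snapshot cephfs-pvc-snapshot
kubectl delete -n snapshots pvc rbd-pvc-clone rbd-pvc-restore cephfs-pvc-clone cephfs-pvc-restore rbd-pvc cephfs-pvc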
