Overview

Ceph CSI plugins implement an interface between a CSI-enabled Container Orchestrator (CO) and a Ceph cluster. They allow Ceph volumes to be dynamically provisioned and attached to workloads.

For supported Ceph-CSI features and available versions, refer to the Ceph-CSI project documentation.

Using an existing Ceph cluster for persistent storage

  • Install the ceph-common package at the same version as the Ceph cluster.
- name: update apt cache
  apt:
    update_cache: yes

- name: install ceph-common
  apt:
    name: ceph-common
    state: present

NOTE: The ceph-common package must be installed on all nodes.

  • Create the keyring for the client: (On Ceph Node)
$ ceph auth get-or-create client.qemu mon 'allow *' osd 'allow rwx pool=rbd' -o /etc/ceph/ceph.client.qemu.keyring
  • Convert the client key to base64: (On Ceph Node)
$ grep key /etc/ceph/ceph.client.qemu.keyring | awk '{printf "%s", $NF}' | base64

NOTE: This base64 key is generated on one of the Ceph MON nodes with the ceph auth get-key client.admin | base64 command; copy the output and paste it as the secret's key value (see the sanity-check sketch after this list).

  • Verify the userId and secret key referenced by userSecretName: (On client Node)
$ rbd ls -m <ceph-monitor-addrs> -p <your-pool> --id <userId> --key=<ceph secret key of userId>
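
As a quick sanity check before pasting the key into a Kubernetes Secret, the base64 value can also be produced directly and decoded again. A minimal sketch using the client.qemu user created above; <base64-key> is a placeholder for the generated value:

$ ceph auth get-key client.qemu            # raw key
$ ceph auth get-key client.qemu | base64   # base64 value for the Secret's key field
$ echo -n '<base64-key>' | base64 -d       # must decode back to the raw key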

Create StorageClass (Dynamic Volume Provisioning) using Ceph RBD

apiVersion: v1
kind: Namespace
metadata:
  name: elk
---
# admin secret, defined at the cluster level; used by the provisioner when creating PVs
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-admin
  namespace: kube-system
data:
  key: QVFEQUNWWmRmTUxBSkJBQXlSV88rUm11RzJSb0J2Tk9SVllSaGc9PQ==
---
# user secret, defined at the namespace level; referenced by PVCs in that namespace
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-qemu
  namespace: elk
data:
  key: QVFCNldYTmRtZ29iREJBQSt4dXorNHp0Wi33RWluR1J4U1hWcnc9PQ==
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: ceph.com/rbd
parameters:
  monitors: 192.168.0.2:6789,192.168.0.3:6789
  pool: rbd
  adminId: admin
  adminSecretNamespace: kube-system
  adminSecretName: ceph-secret-admin
  userId: qemu
  userSecretName: ceph-secret-qemu
  userSecretNamespace: elk
reclaimPolicy: Retain
allowVolumeExpansion: true
volumeBindingMode: Immediate
---
# StatefulSet
......
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pvc
  labels:
    usage: elk
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: ceph-rbd
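
Applying the manifests should result in the claim being provisioned and bound automatically. A minimal sketch; the file name ceph-rbd.yaml is only an example:

$ kubectl apply -f ceph-rbd.yaml
$ kubectl get pvc ceph-pvc        # STATUS changes to Bound once the rbd image has been created
$ kubectl get pv                  # shows the dynamically provisioned PV backed by the rbd pool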

Set the default StorageClass:

$ kubectl patch storageclass ceph-rbd -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
$ kubectl get sc
NAME                 PROVISIONER    AGE
ceph-rbd (default)   ceph.com/rbd   10m

Update a single key in a secret:

echo -n 'Passw0rd' | base64
UGFzc3cwcmQ=

kubectl patch secret test-secret -p='{"data":{"foo": "UGFzc3cwcmQ="}}' -v=1
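
To confirm the patch took effect, read the key back and decode it. A minimal sketch using the test-secret and foo key from the example above:

$ kubectl get secret test-secret -o jsonpath='{.data.foo}' | base64 -d
Passw0rd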

Bug & fix:

Warning ProvisioningFailed 31s (x16 over 19m) persistentvolume-controller Failed to provision volume with StorageClass "ceph-elk": failed to create rbd image: executable file not found in $PATH, command output:

Please check that the quay.io/external_storage/rbd-provisioner:latest image referenced in rbac/deployment.yaml ships the same Ceph version as your Ceph cluster. You can check it like this on any machine running Docker:

ceph-cluster:~$ ceph version
ceph version 12.2.12 (1436006594665279fe734b4c15d7e08c13ebd777) luminous (stable)

$ docker history quay.io/external_storage/rbd-provisioner:v1.0.0-k8s1.10 | grep CEPH_VERSION
<missing> 15 months ago /bin/sh -c #(nop) ENV CEPH_VERSION=luminous 0B
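
The version can also be checked inside the running provisioner. A sketch, under the assumption that the provisioner runs as a Deployment whose pods are labelled app=rbd-provisioner:

$ kubectl get pods -l app=rbd-provisioner
$ kubectl exec <rbd-provisioner-pod> -- rbd --version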

Create Static PV using Ceph RBD

ceph-cluster:~$ rbd create vmd0 -s 64G
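
Before referencing the image from Kubernetes, confirm that it exists and has the expected size. A minimal sketch against the default rbd pool used above:

ceph-cluster:~$ rbd ls -p rbd | grep vmd0
ceph-cluster:~$ rbd info vmd0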
1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
25
26
27
28
29
30
31
32
33
34
35
36
37
38
39
40
41
42
43
44
45
46
47
48
49
50
51
52
53
54
55
56
57
58
apiVersion: v1
kind: Namespace
metadata:
  name: elk
---
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret-qemu
  namespace: elk
data:
  key: QVFCNldYTmRtZ29iREJBQSt4dXorNHp0Wi33RWluR1J4U1hWcnc9PQ==
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ceph-elk-pv0
spec:
  capacity:
    storage: 64Gi
  # The 'accessModes' are used as labels to match a PV and a PVC. They currently do not define
  # any form of access control. All block storage is defined to be single user (non-shared storage).
  accessModes:
    - ReadWriteOnce
  rbd:
    monitors:
      - 192.168.0.2:6789
      - 192.168.0.3:6789
    pool: rbd
    # The 'vmd0' image must already exist on the Ceph cluster.
    image: vmd0
    # 'user' must match the client whose key is stored in 'secretRef' (client.qemu above).
    user: qemu
    secretRef:
      name: ceph-secret-qemu
    fsType: ext4
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-elk-claim0
  namespace: elk
spec:
  # The 'accessModes' do not enforce access rights but instead act as labels to match a PV to a PVC.
  accessModes:
    - ReadWriteOnce
  # An empty storageClassName keeps the default StorageClass (ceph-rbd above) from dynamically
  # provisioning a new volume for this claim, so it binds to the static PV instead.
  storageClassName: ""
  resources:
    requests:
      storage: 64Gi
---
# pod
......
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: ceph-elk-claim0
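
Once applied, the claim binds to the pre-created PV instead of triggering dynamic provisioning. A hedged sketch; ceph-static.yaml is only an example file name:

$ kubectl apply -f ceph-static.yaml
$ kubectl get pv ceph-elk-pv0                # STATUS should become Bound to elk/ceph-elk-claim0
$ kubectl -n elk get pvc ceph-elk-claim0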