Install ceph-common
Install the ceph-common package on every Kubernetes node, including the masters. The package version must match the Ceph cluster you are connecting to (tested with Ceph 10.2.0 and Kubernetes 1.9.1):
yum install -y ceph-common
Generate the Ceph secret
Use the ceph.client.admin.keyring file provided by your Ceph administrator to generate the secret. We placed it under /etc/ceph (for this test it was copied straight from the /etc/ceph directory on the Ceph cluster):
grep key /etc/ceph/ceph.client.admin.keyring |awk '{printf "%s", $NF}'|base64
The key must be base64-encoded (note that base64 is an encoding, not encryption). This produces the encoded key QVFDWDA2aFo5TG5TQnhBQVl1b0lUL2V3YlRSaEtwVEhPWkxvUlE9PQ==, which we will use later.
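As a sanity check, the extraction pipeline can be exercised against a sample keyring. The key below is the example value used later in this article, not a live credential. Note that awk's printf emits no trailing newline; piping `echo` output instead would sneak a newline into the encoded secret:

```shell
# Write a sample keyring in the same format as /etc/ceph/ceph.client.admin.keyring
cat > sample.keyring <<'EOF'
[client.admin]
    key = AQDusT1aFpiKFxAAHotuwTobQTThOznK2iXR6g==
    caps mon = "allow *"
    caps osd = "allow *"
EOF

# Extract and base64-encode the key, exactly as in the command above
ENCODED=$(grep key sample.keyring | awk '{printf "%s", $NF}' | base64)
echo "$ENCODED"

# Decoding must return the original key unchanged
printf '%s' "$ENCODED" | base64 -d
```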
Create the Ceph secret
Create a file named ceph-secret.yaml with the following content:
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: comall
type: "kubernetes.io/rbd"
data:
  # raw key before base64 encoding: AQDusT1aFpiKFxAAHotuwTobQTThOznK2iXR6g==
  key: QVFEdXNUMWFGcGlLRnhBQUhvdHV3VG9iUVRUaE96bksyaVhSNmc9PQo=
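If you prefer, the manifest above can be generated in one step. This is a sketch using the sample key from this article; on a real node you would substitute the output of the grep pipeline against /etc/ceph/ceph.client.admin.keyring. Using printf rather than echo keeps a stray newline out of the encoded value:

```shell
# Sample key from this article, not a live credential
KEY='AQDusT1aFpiKFxAAHotuwTobQTThOznK2iXR6g=='

# Generate ceph-secret.yaml with the base64-encoded key filled in
cat > ceph-secret.yaml <<EOF
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
  namespace: comall
type: kubernetes.io/rbd
data:
  key: $(printf '%s' "$KEY" | base64)
EOF
cat ceph-secret.yaml
```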
Create the StorageClass
Create a file named ceph-class.yaml with the following content (the format is documented at https://kubernetes.io/docs/concepts/storage/storage-classes/#ceph-rbd):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ceph-rbd
provisioner: kubernetes.io/rbd
parameters:
  monitors: 10.90.24.234:6789,10.90.24.129:6789,10.90.24.141:6789
  adminId: admin
  adminSecretName: ceph-secret
  adminSecretNamespace: comall
  pool: ceshi  # defaults to the rbd pool; in production, create a dedicated pool for isolation
  userId: admin
  userSecretName: ceph-secret
  # The parameters below control the RBD image and filesystem created for each volume:
  # fsType: xfs
  # imageFormat: "2"
  # imageFeatures: "layering"
Create the PVC
Create a file named ceph-pvc.yaml with the following content:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-pvc2
  namespace: comall  # must be in the same namespace as the pod that uses it
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
  storageClassName: ceph-rbd
Create the pod
Create a Deployment that uses the PVC we just created:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: ceph-nginx
  namespace: comall
spec:
  replicas: 1
  template:
    metadata:
      labels:
        name: ceph-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:alpine
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 80
        volumeMounts:
        - name: ceph-rbd-volume
          mountPath: "/usr/share/nginx/html"
      volumes:
      - name: ceph-rbd-volume
        persistentVolumeClaim:
          claimName: ceph-pvc2
Check the mounts inside the pod:
[root@test-server-4 ceph]# kubectl get pods -n comall
NAME READY STATUS RESTARTS AGE
ceph-nginx-6fd689788f-4stcv 1/1 Running 0 28m
my-nginx-8c56b8777-jjldx 1/1 Running 0 5d
whoami-687bd5fd4-bdrf2 1/1 Running 0 5d
whoami-687bd5fd4-hgknm 1/1 Running 0 5d
whoami-687bd5fd4-r7hj9 1/1 Running 0 5d
zk1-774f7f9bf6-f65bv 1/1 Running 0 1d
zk2-5578b64d4-kp8qs 1/1 Running 0 1d
zk3-78976db5b6-vbksv 1/1 Running 0 1d
The directory /usr/share/nginx/html inside the pod is the RBD volume from Ceph. Here it was automatically formatted as ext4; a different filesystem can be requested via fsType when creating the StorageClass:
[root@test-server-4 ceph]# kubectl -n comall exec -it ceph-nginx-6fd689788f-4stcv sh
/ # df -Th
Filesystem Type Size Used Available Use% Mounted on
overlay overlay 926.6G 26.5G 900.0G 3% /
tmpfs tmpfs 64.0M 0 64.0M 0% /dev
tmpfs tmpfs 15.6G 0 15.6G 0% /sys/fs/cgroup
/dev/mapper/centos-root
xfs 926.6G 26.5G 900.0G 3% /dev/termination-log
/dev/mapper/centos-root
xfs 926.6G 26.5G 900.0G 3% /etc/resolv.conf
/dev/mapper/centos-root
xfs 926.6G 26.5G 900.0G 3% /etc/hostname
/dev/mapper/centos-root
xfs 926.6G 26.5G 900.0G 3% /etc/hosts
shm tmpfs 64.0M 0 64.0M 0% /dev/shm
/dev/rbd3 ext4 1.9G 6.0M 1.8G 0% /usr/share/nginx/html
tmpfs tmpfs 15.6G 12.0K 15.6G 0% /var/run/secrets/kubernetes.io/serviceaccount
tmpfs tmpfs 64.0M 0 64.0M 0% /proc/kcore
tmpfs tmpfs 64.0M 0 64.0M 0% /proc/timer_list
tmpfs tmpfs 64.0M 0 64.0M 0% /proc/timer_stats
tmpfs tmpfs 64.0M 0 64.0M 0% /proc/sched_debug
tmpfs tmpfs 15.6G 0 15.6G 0% /proc/scsi
tmpfs tmpfs 15.6G 0 15.6G 0% /sys/firmware
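For example, to have newly provisioned volumes formatted as xfs instead of the default ext4, uncomment fsType in the StorageClass parameters shown earlier (a fragment of ceph-class.yaml; existing volumes keep their filesystem):

```yaml
parameters:
  # request xfs instead of the default ext4 for new RBD volumes
  fsType: xfs
```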
Originally published at https://zhusl.com/post/storageclass-CEPH-rbd.html on 2018-04-04, tagged "kubernetes, ceph, storageclass". Licensed under the Creative Commons Attribution 4.0 International license; please credit the author and link back to the original when reposting.