
Deploying an NFS-based StorageClass in Kubernetes


A StorageClass provides dynamic storage: each pod simply declares how much capacity it needs in its resource manifest. NFS, however, does not support dynamic provisioning through a StorageClass out of the box.

To summarize:

1. With static storage, the workflow is: prepare the storage first, then create a PV backed by it; then create a PVC, which binds to a matching PV based on the requested capacity (a sketch of this flow follows this list).

2. With dynamic storage, you prepare the storage and then create a PVC directly; the StorageClass automatically creates a PV of the requested size.
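For contrast with the dynamic flow the rest of this post sets up, here is a minimal sketch of the static flow from point 1. The object names, the 5Gi size, and the /home/k8sdata/static path are illustrative assumptions, not part of the original setup:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: static-nfs-pv                # illustrative name
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 10.10.11.182             # the NFS server used later in this post
    path: /home/k8sdata/static       # illustrative export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: static-nfs-pvc               # illustrative name
spec:
  storageClassName: ""               # empty string disables dynamic provisioning for this claim
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi                   # binds to a PV with at least this much capacity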

First, install the NFS server, and install nfs-utils on every node host:

# on the NFS server host (CentOS/RHEL ship the NFS server in the nfs-utils package)
yum -y install nfs-utils
systemctl enable --now nfs-server
# on every Kubernetes node host, so kubelet can mount NFS volumes
yum -y install nfs-utils
# export the shared directory: add this line to /etc/exports, then run `exportfs -r`
/home/k8sdata *(rw,sync,all_squash)
# open-source provisioner that adds StorageClass (dynamic provisioning) support for NFS
git clone https://github.com/kubernetes-incubator/external-storage.git
cd external-storage/nfs-client/deploy
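Before deploying the provisioner it is worth a quick sanity check that the export is actually visible; showmount ships with nfs-utils, and the IP is the NFS server address used later in this post:

showmount -e 10.10.11.182
# should list /home/k8sdata in the export list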
1. First, apply the RBAC authorization:
[root@k8s-master storage]# cat rbac.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
---
kind: ClusterRole
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: nfs-client-provisioner-runner
rules:
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "create", "delete"]
  - apiGroups: [""]
    resources: ["persistentvolumeclaims"]
    verbs: ["get", "list", "watch", "update"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["storageclasses"]
    verbs: ["get", "list", "watch"]
  - apiGroups: [""]
    resources: ["events"]
    verbs: ["create", "update", "patch"]
---
kind: ClusterRoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: run-nfs-client-provisioner
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: ClusterRole
  name: nfs-client-provisioner-runner
  apiGroup: rbac.authorization.k8s.io
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["endpoints"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
kind: RoleBinding
apiVersion: rbac.authorization.k8s.io/v1
metadata:
  name: leader-locking-nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
subjects:
  - kind: ServiceAccount
    name: nfs-client-provisioner
    # replace with namespace where provisioner is deployed
    namespace: default
roleRef:
  kind: Role
  name: leader-locking-nfs-client-provisioner
  apiGroup: rbac.authorization.k8s.io
2. Deploy the provisioner. One way to understand this plugin:

NFS itself does not support dynamic provisioning, so we run a pod that bridges the gap: the entire NFS share is mounted into that pod, and the pod in turn provisions per-claim volumes for users through the StorageClass.

[root@k8s-master storage]# cat deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-client-provisioner
  labels:
    app: nfs-client-provisioner
  # replace with namespace where provisioner is deployed
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nfs-client-provisioner
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: nfs-client-provisioner
    spec:
      serviceAccountName: nfs-client-provisioner
      containers:
        - name: nfs-client-provisioner
          image: registry.cn-hangzhou.aliyuncs.com/joink8s/nfs-client-provisioner:latest1
          volumeMounts:
            - name: nfs-client-root
              mountPath: /persistentvolumes
          env:
            - name: PROVISIONER_NAME
              value: joink8s
            - name: NFS_SERVER
              value: 10.10.11.182
            - name: NFS_PATH
              value: /home/k8sdata
      volumes:
        - name: nfs-client-root
          nfs:
            server: 10.10.11.182
            path: /home/k8sdata
[root@k8s-master storage]# cat class.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: managed-nfs-storage
provisioner: joink8s
# must match the PROVISIONER_NAME value defined in deployment.yaml above
parameters:
  archiveOnDelete: "false"
kubectl apply -f rbac.yaml
kubectl apply -f deployment.yaml
kubectl apply -f class.yaml
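At this point the provisioner pod should be running and the StorageClass should exist; a quick check with standard kubectl commands (output will vary by cluster):

kubectl get pods -l app=nfs-client-provisioner
kubectl get storageclass managed-nfs-storage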

That's it. You can now create pods directly and reference the PVC in the pod's resource manifest.
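The pod example below references a PVC named test-claim, so that claim has to exist first. Here is a minimal sketch, modeled on the test-claim.yaml that ships with the external-storage repo; the 1Mi request size is illustrative:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: test-claim
spec:
  storageClassName: managed-nfs-storage   # the StorageClass created above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Mi                         # illustrative size; the provisioner creates a matching PV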

Example:

kind: Pod
apiVersion: v1
metadata:
  name: test-pod
spec:
  containers:
  - name: test-pod
    image: gcr.io/google_containers/busybox:1.24
    command:
      - "/bin/sh"
    args:
      - "-c"
      - "touch /mnt/SUCCESS && exit 0 || exit 1"
    volumeMounts:
      - name: nfs-pvc
        mountPath: "/mnt"
  restartPolicy: "Never"
  volumes:
    - name: nfs-pvc
      persistentVolumeClaim:
        claimName: test-claim  # name of the PVC
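To confirm that dynamic provisioning worked, check that the claim is Bound and that a PV was created automatically; standard kubectl commands, plus a look at the export on the NFS server:

kubectl get pvc test-claim
kubectl get pv
# on the NFS server: the auto-created subdirectory under /home/k8sdata should contain the SUCCESS file
ls /home/k8sdata/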


Original article: https://www.cnblogs.com/jojoword/p/12341128.html
