
Chapter 7: Kubernetes Storage




1. Why are volumes needed?

During container deployment there are generally three kinds of data:
· Initial data required at startup, for example configuration files
· Temporary data generated while running, which needs to be shared between several containers
· Persistent data generated while running



2. Volumes overview

A Volume in Kubernetes provides the ability to mount external storage inside a container.
A Pod can use a Volume only after two pieces of information are set: the volume source (spec.volumes) and the mount point (spec.containers.volumeMounts).
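A minimal sketch of how the two pieces fit together (the names pod-demo, app and data are placeholders for illustration):

apiVersion: v1
kind: Pod
metadata:
  name: pod-demo
spec:
  containers:
  - name: app
    image: nginx
    volumeMounts:
    - name: data            # must match a volume name under spec.volumes
      mountPath: /data      # path inside the container
  volumes:
  - name: data              # the volume source; swap emptyDir for nfs, hostPath, etc.
    emptyDir: {}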

The supported volume types can be found in the official documentation:

    awsElasticBlockStore
    azureDisk
    azureFile
    cephfs
    cinder
    configMap
    csi
    downwardAPI
    emptyDir
    fc (fibre channel)
    flexVolume
    flocker
    gcePersistentDisk
    gitRepo (deprecated)
    glusterfs
    hostPath
    iscsi
    local
    nfs
    persistentVolumeClaim
    projected
    portworxVolume
    quobyte
    rbd
    scaleIO
    secret
    storageos
    vsphereVolume
Storage types supported by Kubernetes

A simple classification:
1. Local, e.g. emptyDir, hostPath
2. Network, e.g. nfs, cephfs, glusterfs
3. Public cloud, e.g. azureDisk, awsElasticBlockStore
4. Kubernetes resources, e.g. secret, configMap
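The volume sources supported by the cluster's API version can also be listed from the command line:

kubectl explain pod.spec.volumes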


3. Ephemeral volumes, node-local volumes, and network volumes


Ephemeral volume: emptyDir


Creates an empty volume and mounts it into the Pod's containers. When the Pod is deleted, the volume is deleted with it.
Use case: sharing data between containers in the same Pod


Default emptyDir directory on the node:
/var/lib/kubelet/pods/<pod-id>/volumes/kubernetes.io~empty-dir


Which containers are suited to running together in one Pod? Ones that need to share data, as in the example below.

In the manifest, emptyDir: {} is an empty value: the default settings are used.

[root@k8s-m1 chp7]# cat emptyDir.yml
apiVersion: v1
kind: Pod
metadata:
  name: emptydir
spec:
  containers:
  - name: write
    image: centos
    command: ["bash","-c","for i in {1..100};do echo $i >> /data/hello;sleep 1;done"]
    volumeMounts:
    - name: data
      mountPath: /data
  - name: read
    image: centos
    command: ["bash","-c","tail -f /data/hello"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    emptyDir: {}
[root@k8s-m1 chp7]# kubectl apply -f emptyDir.yml
pod/emptydir created
[root@k8s-m1 chp7]# kubectl get po -o wide
NAME                              READY   STATUS    RESTARTS   AGE     IP               NODE     NOMINATED NODE   READINESS GATES
emptydir                          2/2     Running   1          116s    10.244.111.203   k8s-n2   <none>           <none>

[root@k8s-n2 data]# docker ps |grep emptydir
cbaf1b92b4a8        centos                                              "bash -c 'for i in {…"   About a minute ago   Up About a minute                       k8s_write_emptydir_default_df40c32a-9f0a-44b7-9c17-89c9e9725da2_3
bce0f2607620        centos                                              "bash -c 'tail -f /d…"   7 minutes ago        Up 7 minutes                            k8s_read_emptydir_default_df40c32a-9f0a-44b7-9c17-89c9e9725da2_0
0b804b8db60f        registry.aliyuncs.com/google_containers/pause:3.2   "/pause"                 7 minutes ago        Up 7 minutes                            k8s_POD_emptydir_default_df40c32a-9f0a-44b7-9c17-89c9e9725da2_0


[root@k8s-n2 data]# pwd
/var/lib/kubelet/pods/df40c32a-9f0a-44b7-9c17-89c9e9725da2/volumes/kubernetes.io~empty-dir/data
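Besides the empty default {}, emptyDir accepts two optional fields, medium and sizeLimit; a minimal sketch (the volume name cache is illustrative):

  volumes:
  - name: cache
    emptyDir:
      medium: Memory      # back the volume with tmpfs (node RAM) instead of node disk
      sizeLimit: 100Mi    # the Pod is evicted if the volume grows beyond this limit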


Node-local volume: hostPath


Mounts a file or directory from the node's filesystem into the Pod's containers.
Use case: containers in a Pod need to access files on the host

[root@k8s-m1 chp7]# cat hostPath.yml
apiVersion: v1
kind: Pod
metadata:
  name: host-path
spec:
  containers:
  - name: centos
    image: centos
    command: ["bash","-c","sleep 36000"]
    volumeMounts:
    - name: data
      mountPath: /data
  volumes:
  - name: data
    hostPath:
      path: /tmp
      type: Directory

[root@k8s-m1 chp7]# kubectl apply -f hostPath.yml
pod/host-path created
[root@k8s-m1 chp7]# kubectl exec host-path -it bash
kubectl exec [POD] [COMMAND] is DEPRECATED and will be removed in a future version. Use kubectl kubectl exec [POD] -- [COMMAND] instead.

[root@host-path data]# pwd
/data
[root@host-path data]# touch test.txt
[root@k8s-m1 ~]# ls -l /tmp/test.txt
-rw-r--r--. 1 root root 5 8月  18 22:25 /tmp/test.txt
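The hostPath type field controls how the path is validated: Directory (used above) requires the directory to already exist on the node, while DirectoryOrCreate creates it if missing; File, FileOrCreate, Socket and other values are also supported. A sketch, with the illustrative path /data/app:

  volumes:
  - name: data
    hostPath:
      path: /data/app
      type: DirectoryOrCreate   # create the directory on the node if it does not exist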


Network volume: NFS

yum install nfs-utils -y    # on the NFS server (here k8s-n2)
[root@k8s-n2 ~]# mkdir /nfs/k8s -p
[root@k8s-n2 ~]# vim /etc/exports
[root@k8s-n2 ~]# cat /etc/exports
/nfs/k8s 10.0.0.0/24(rw,no_root_squash)
# no_root_squash: when the user accessing the share from an NFS client is root, keep root privileges. (Without this option the default root_squash maps root to the anonymous user, i.e. UID/GID nobody.)

[root@k8s-n2 ~]# systemctl restart nfs
[root@k8s-n2 ~]# systemctl enable nfs
Created symlink from /etc/systemd/system/multi-user.target.wants/nfs-server.service to /usr/lib/systemd/system/nfs-server.service.
[root@k8s-n2 ~]#

# Test
[root@k8s-n1 ~]# mount -t nfs 10.0.0.25:/nfs/k8s /mnt/

[root@k8s-n1 ~]# df -h |grep nfs
10.0.0.25:/nfs/k8s              26G  5.8G   21G   23% /mnt
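Note that every Kubernetes node that may run the Pods also needs the nfs-utils package installed, since kubelet performs an equivalent mount on the node when a Pod using the volume is scheduled. The manual test mount can be removed once verified:

[root@k8s-n1 ~]# umount /mnt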

View the NFS exports:
[root@k8s-n2 ~]# showmount -e
Export list for k8s-n2:
/nfs/k8s 10.0.0.0/24
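If /etc/exports is edited later, the exports can also be reloaded without restarting the service:

[root@k8s-n2 ~]# exportfs -rav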

Create the application
[root@k8s-m1 chp7]# cat nfs-deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nfs-nginx-deploy
spec:
  selector:
    matchLabels:
      app: nfs-nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: nfs-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        nfs:
          server: 10.0.0.25
          path: /nfs/k8s

[root@k8s-m1 chp7]# kubectl apply -f nfs-deploy.yml
[root@k8s-m1 chp7]# kubectl get pod -o wide|grep nfs
nfs-nginx-deploy-848f4597c9-658ws   1/1     Running            0          2m33s   10.244.111.205   k8s-n2   <none>           <none>
nfs-nginx-deploy-848f4597c9-bzl5w   1/1     Running            0          2m33s   10.244.111.207   k8s-n2   <none>           <none>
nfs-nginx-deploy-848f4597c9-wz422   1/1     Running            0          2m33s   10.244.111.208   k8s-n2   <none>           <none>

Create an index page on the NFS server; the file is then visible from inside the containers as well.

[root@k8s-n2 ~]# echo "hello world" >/nfs/k8s/index.html
[root@k8s-m1 chp7]# curl 10.244.111.205
hello world

[root@k8s-m1 chp7]# kubectl exec nfs-nginx-deploy-848f4597c9-wz422 -it -- bash
root@nfs-nginx-deploy-848f4597c9-wz422:/# mount|grep k8s
10.0.0.25:/nfs/k8s on /usr/share/nginx/html type nfs4 (rw,relatime,vers=4.1,rsize=524288,wsize=524288,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.25,local_lock=none,addr=10.0.0.25)
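Because all three replicas serve the same NFS-backed directory, they can be put behind a single Service address; a minimal sketch (the Service name nfs-nginx-svc is illustrative):

apiVersion: v1
kind: Service
metadata:
  name: nfs-nginx-svc
spec:
  selector:
    app: nfs-nginx        # matches the Pod labels of the Deployment above
  ports:
  - port: 80              # Service port
    targetPort: 80        # container port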




4. Persistent Volumes overview

A PersistentVolume (PV) is a piece of storage in the cluster, provisioned by an administrator, with a lifecycle independent of any individual Pod. A PersistentVolumeClaim (PVC) is a user's request for storage; Kubernetes binds the claim to a PV that satisfies the requested capacity and access modes, and Pods mount the storage by referencing the claim instead of the concrete volume type.

[root@k8s-m1 chp7]# cat pv.yml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0001
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/k8s/pv0001
    server: 10.0.0.25
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0002
spec:
  capacity:
    storage: 15Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/k8s/pv0002
    server: 10.0.0.25
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv0003
spec:
  capacity:
    storage: 30Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /nfs/k8s/pv0003
    server: 10.0.0.25
Create the PVs:
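The subdirectories referenced by the PVs must already exist on the NFS server before Pods can mount them (assuming the same server as above):

[root@k8s-n2 ~]# mkdir -p /nfs/k8s/pv0001 /nfs/k8s/pv0002 /nfs/k8s/pv0003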
[root@k8s-m1 chp7]# kubectl apply -f pv.yml
persistentvolume/pv0001 created
persistentvolume/pv0002 created
persistentvolume/pv0003 created

[root@k8s-m1 chp7]# kubectl get pv
NAME     CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM   STORAGECLASS   REASON   AGE
pv0001   5Gi        RWX            Recycle          Available                                   53s
pv0002   15Gi       RWX            Recycle          Available                                   53s
pv0003   30Gi       RWX            Recycle          Available                                   53s
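Note that the Recycle reclaim policy used above is deprecated in current Kubernetes releases; Retain (or dynamic provisioning, see section 6) is generally preferred:

  persistentVolumeReclaimPolicy: Retain   # keep the volume and its data after the claim is released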


[root@k8s-m1 chp7]# cat pvc-deploy.yml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: pvc-ngnix
spec:
  selector:
    matchLabels:
      app: pvc-nginx
  replicas: 3
  template:
    metadata:
      labels:
        app: pvc-nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - name: wwwroot
          mountPath: /usr/share/nginx/html
        ports:
        - containerPort: 80
      volumes:
      - name: wwwroot
        persistentVolumeClaim:
          claimName: my-pvc

---

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
Create the application


[root@k8s-m1 chp7]# kubectl apply -f pvc-deploy.yml
deployment.apps/pvc-ngnix unchanged
persistentvolumeclaim/my-pvc created

[root@k8s-m1 chp7]# kubectl get pvc
NAME     STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
my-pvc   Bound    pv0001   5Gi        RWX                           12m

[root@k8s-n2 pv0001]# echo "hello pvc" >index.html
[root@k8s-m1 chp7]# curl 10.244.111.209
hello pvc

AccessModes (access modes):
AccessModes describe how a user application may access the storage resource of a PV. The following modes are supported:
ReadWriteOnce (RWO): read-write, but the volume can be mounted by only a single node
ReadOnlyMany (ROX): read-only, the volume can be mounted by many nodes
ReadWriteMany (RWX): read-write, the volume can be mounted by many nodes
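The requested size and access mode decide which PV a claim is bound to; here the 5Gi RWX request was satisfied by pv0001. If a claim stays Pending, describing it shows the reason:

kubectl describe pvc my-pvc
kubectl get pv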






5. Static PV provisioning
6. Dynamic PV provisioning
7. Case study: an application storing data on a persistent volume
8. Deploying stateful applications: the StatefulSet controller
9. Storing application configuration files: ConfigMap
10. Storing sensitive data: Secret


Original article: https://www.cnblogs.com/wenyule/p/13526872.html
