
An example of deploying a stateful service (ZooKeeper) on Kubernetes

Date: 2020-01-10 18:52:27


Setting the configuration file aside for now, the manifests look like this:

apiVersion: apps/v1
kind: StatefulSet   ##### StatefulSets give pods fixed hostnames; use this kind for stateful services. One caveat: a pod's hostname only resolves while that pod is Running, which creates a chicken-and-egg problem: when sed rewrites the config before the pod reaches Running, the pod can resolve only its own hostname, not its peers'. That is why the ZooKeeper config later switches to the pod IP.
metadata:
  name: zookeeper
spec:
  serviceName: zookeeper  #### the 3 generated pods are therefore named zookeeper-0, zookeeper-1, zookeeper-2
  replicas: 3
  revisionHistoryLimit: 10
  selector:  ## required for a StatefulSet
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
      - name: volume-logs
        hostPath: 
          path: /var/log/zookeeper
      containers:
      - name: zookeeper
        image: harbor.test.com/middleware/zookeeper:3.4.10
        imagePullPolicy: IfNotPresent
        livenessProbe:
          tcpSocket:
            port: 2181
          initialDelaySeconds: 30
          timeoutSeconds: 3
          periodSeconds: 5
          successThreshold: 1
          failureThreshold: 2
        ports:
        - containerPort: 2181
          protocol: TCP
        - containerPort: 2888
          protocol: TCP
        - containerPort: 3888
          protocol: TCP
        env:
        - name: SERVICE_NAME
          value: "zookeeper"
        - name: MY_POD_NAME  # exposes a built-in k8s field; once the pod is created you can run echo ${MY_POD_NAME} inside it to get the hostname
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: volume-logs
          mountPath: /var/log/zookeeper
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper # the cluster DNS name: from any of the generated pods you can ping zookeeper, so it effectively acts as the cluster name for the 3 pods. Successive pings may return different addresses, and nslookup zookeeper returns the pod IPs of all 3 pods, 3 records in total.
spec:
  ports:
  - port: 2181
  selector:
    app: zookeeper 
  clusterIP: None  # this line is required; it makes the Service headless
[root@host5 src]# kubectl get pod --all-namespaces -o wide
NAMESPACE     NAME                                       READY   STATUS    RESTARTS   AGE     IP               NODE    NOMINATED NODE   READINESS GATES
default       zookeeper-0                                1/1     Running   0          12m     192.168.55.69    host3   <none>           <none>
default       zookeeper-1                                1/1     Running   0          12m     192.168.31.93    host4   <none>           <none>
default       zookeeper-2                                1/1     Running   0          12m     192.168.55.70    host3   <none>           <none>
bash-4.3# nslookup zookeeper
nslookup: can't resolve '(null)': Name does not resolve

Name:      zookeeper
Address 1: 192.168.55.70 zookeeper-2.zookeeper.default.svc.cluster.local
Address 2: 192.168.55.69 zookeeper-0.zookeeper.default.svc.cluster.local
Address 3: 192.168.31.93 zookeeper-1.zookeeper.default.svc.cluster.local
bash-4.3# ping zookeeper-0.zookeeper
PING zookeeper-0.zookeeper (192.168.55.69): 56 data bytes
64 bytes from 192.168.55.69: seq=0 ttl=63 time=0.109 ms
64 bytes from 192.168.55.69: seq=1 ttl=63 time=0.212 ms
^C
--- zookeeper-0.zookeeper ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.109/0.160/0.212 ms
bash-4.3# ping zookeeper-1.zookeeper
PING zookeeper-1.zookeeper (192.168.31.93): 56 data bytes
64 bytes from 192.168.31.93: seq=0 ttl=62 time=0.535 ms
64 bytes from 192.168.31.93: seq=1 ttl=62 time=0.507 ms
64 bytes from 192.168.31.93: seq=2 ttl=62 time=0.587 ms
^C
--- zookeeper-1.zookeeper ping statistics ---
3 packets transmitted, 3 packets received, 0% packet loss
round-trip min/avg/max = 0.507/0.543/0.587 ms
bash-4.3# ping zookeeper-2.zookeeper
PING zookeeper-2.zookeeper (192.168.55.70): 56 data bytes
64 bytes from 192.168.55.70: seq=0 ttl=64 time=0.058 ms
64 bytes from 192.168.55.70: seq=1 ttl=64 time=0.081 ms
^C
--- zookeeper-2.zookeeper ping statistics ---
2 packets transmitted, 2 packets received, 0% packet loss
round-trip min/avg/max = 0.058/0.069/0.081 ms
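The stable per-pod DNS names shown above follow the pattern `<pod>.<service>.<namespace>.svc.cluster.local`, which is exactly what a client can use to build a connection string. A minimal sketch (service and namespace names taken from this example):

```shell
# Build a ZooKeeper client connection string from the stable DNS names
# of the headless service "zookeeper" in namespace "default".
HOSTS=""
for i in 0 1 2; do
  HOSTS="${HOSTS:+$HOSTS,}zookeeper-$i.zookeeper.default.svc.cluster.local:2181"
done
echo "$HOSTS"
```

These names stay valid across pod restarts even though the pod IPs change, which is the point of pairing a StatefulSet with a headless Service.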

The commonly used built-in Kubernetes (Downward API) fields are:

env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
spec.nodeName: the name of the node the pod is scheduled on (the host)

status.podIP: the pod's IP address
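Inside the container these are plain environment variables, so a pod can, for example, assemble its own FQDN within the headless service from them. A sketch with sample values (outside a real pod nothing injects them, so they are set by hand here):

```shell
# Sample values; in a real pod the kubelet injects these via fieldRef.
MY_POD_NAME="zookeeper-1"
MY_POD_NAMESPACE="default"
SERVICE_NAME="zookeeper"
# A pod can compute its own stable DNS name from the injected fields:
FQDN="${MY_POD_NAME}.${SERVICE_NAME}.${MY_POD_NAMESPACE}.svc.cluster.local"
echo "${FQDN}"
```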

Now let's look at the configuration file:

[root@docker06 conf]# cat zoo.cfg |grep -v ^#|grep -v ^$
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress= docker06
server.1=docker05:2888:3888
server.2=docker06:2888:3888
server.3=docker04:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000

We need to change it into something like:

tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress= docker06  # the 3 server.N lines below are fixed; this is the line that must be changed to the pod's own MY_POD_IP. We could mount the config via a ConfigMap and then sed this line inside the pod.
server.1=zookeeper-0.zookeeper:2888:3888
server.2=zookeeper-1.zookeeper:2888:3888
server.3=zookeeper-2.zookeeper:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000

Consider the following approach (taken from a Redis cluster deployment):

First mount the files into the pod via a ConfigMap, e.g. fix-ip.sh:

apiVersion: v1
kind: ConfigMap
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
data:
  fix-ip.sh: |
    #!/bin/sh
    CLUSTER_CONFIG="/var/lib/redis/nodes.conf"
    if [ -f ${CLUSTER_CONFIG} ]; then
      if [ -z "${POD_IP}" ]; then
        echo "Unable to determine Pod IP address!"
        exit 1
      fi
      echo "Updating my IP to ${POD_IP} in ${CLUSTER_CONFIG}"
      sed -i.bak -e "/myself/s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" ${CLUSTER_CONFIG}
    fi
    exec "$@"
  redis.conf: |+
    cluster-enabled yes
    cluster-require-full-coverage no
    cluster-node-timeout 15000
    cluster-config-file /var/lib/redis/nodes.conf
    cluster-migration-barrier 1
    appendonly yes
    protected-mode no
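To see what the sed in fix-ip.sh actually does, here is the same substitution run against a sample nodes.conf line (the node ID and IPs are made-up values; in the pod, POD_IP comes from the Downward API):

```shell
# On the line flagged "myself", replace the first IPv4 address with POD_IP,
# exactly as fix-ip.sh does for /var/lib/redis/nodes.conf.
POD_IP="10.244.1.7"            # sample value; injected via fieldRef in a real pod
CLUSTER_CONFIG=$(mktemp)       # stand-in for nodes.conf
printf 'abcd1234 10.0.0.5:6379@16379 myself,master - 0 0 1 connected\n' > "${CLUSTER_CONFIG}"
sed -i -e "/myself/s/[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}\.[0-9]\{1,3\}/${POD_IP}/" "${CLUSTER_CONFIG}"
cat "${CLUSTER_CONFIG}"
```

Only the line containing "myself" (the node's own entry) is touched; other cluster members' lines keep their addresses.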

Then run the script when the pod starts:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis-cluster
  labels:
    app: redis-cluster
spec:
  serviceName: redis-cluster
  replicas: 6
  selector:
    matchLabels:
      app: redis-cluster
  template:
    metadata:
      labels:
        app: redis-cluster
    spec:
      containers:
      - name: redis
        image: 10.11.100.85/library/redis
        ports:
        - containerPort: 6379
          name: client
        - containerPort: 16379
          name: gossip
        command: ["/etc/redis/fix-ip.sh", "redis-server", "/etc/redis/redis.conf"]  # runs the script first, which then exec's redis-server
        readinessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 15
          timeoutSeconds: 5
        livenessProbe:
          exec:
            command:
            - sh
            - -c
            - "redis-cli -h $(hostname) ping"
          initialDelaySeconds: 20
          periodSeconds: 3
        env:
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        volumeMounts:
        - name: conf
          mountPath: /etc/redis
          readOnly: false
        - name: data
          mountPath: /var/lib/redis
          readOnly: false
      volumes:
      - name: conf
        configMap:
          name: redis-cluster
          defaultMode: 0755
#          items:
#          - key: redis.conf
#            path: redis.conf
#          - key: fix-ip.sh
#            path: fix-ip.sh
  volumeClaimTemplates:
  - metadata:
      name: data
      labels:
        name: redis-cluster
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 150Mi

Note: files mounted from a ConfigMap are read-only, so they cannot be modified in place with sed. A workaround is to mount them into a temporary directory, copy them to their real location, and sed the copy. This has its own drawback: if you later update the ConfigMap dynamically, only the files in the temporary directory change, not the copies.
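The copy-then-edit workaround can be sketched as follows. All paths and values are illustrative stand-ins (temporary directories replace the real ConfigMap mount so the sketch is self-contained):

```shell
# CM_DIR stands in for the read-only ConfigMap mount; CONF_DIR is the
# writable directory the process actually reads its config from.
CM_DIR=$(mktemp -d)
CONF_DIR=$(mktemp -d)
POD_IP="192.168.55.69"   # injected via the Downward API in a real pod
printf 'clientPortAddress=PODIP\n' > "$CM_DIR/zoo.cfg"
# Copy out of the read-only mount, then edit the writable copy.
cp "$CM_DIR/zoo.cfg" "$CONF_DIR/zoo.cfg"
sed -i "s/PODIP/${POD_IP}/g" "$CONF_DIR/zoo.cfg"
cat "$CONF_DIR/zoo.cfg"
```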

Configuration used in a real production environment:

1. Rebuild the image

[root@host4 zookeeper]# ll
total 4
drwxr-xr-x 2 root root  45 May 24 15:48 conf
-rw-r--r-- 1 root root 143 May 23 06:19 Dockerfile
drwxr-xr-x 2 root root  20 May 24 15:48 scripts

[root@host4 zookeeper]# cd conf
[root@host4 conf]# ll
total 8
-rw-r--r-- 1 root root 1503 May 23 04:15 log4j.properties
-rw-r--r-- 1 root root  324 May 24 15:48 zoo.cfg

[root@host4 conf]# cat zoo.cfg 
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/data
clientPort=2181
clientPortAddress=PODIP # use the IP here and hostnames below; see the explanation earlier in this article
server.1=zookeeper-0.zookeeper:2888:3888
server.2=zookeeper-1.zookeeper:2888:3888
server.3=zookeeper-2.zookeeper:2888:3888
snapCount=10000
leaderServes=yes
autopurge.snapRetainCount=3
autopurge.purgeInterval=2
maxClientCnxns=1000

[root@host4 conf]# cd ../scripts/
[root@host4 scripts]# ll
total 4
-rwxr-xr-x 1 root root 177 May 24 15:48 sed.sh

[root@host4 scripts]# cat sed.sh 
#!/bin/bash
MY_ID=`echo ${MY_POD_NAME} |awk -F'-' '{print $NF}'`
MY_ID=`expr ${MY_ID} + 1`
echo ${MY_ID} > /data/myid
sed -i 's/PODIP/'${MY_POD_IP}'/g' /conf/zoo.cfg
exec "$@"
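To check the myid arithmetic in sed.sh: the pod ordinal is the suffix of the pod name, and ZooKeeper IDs are 1-based, so zookeeper-2 should get myid 3. The same derivation with a sample pod name (hand-set here, since outside a pod MY_POD_NAME is not injected):

```shell
# Same logic as sed.sh: take the ordinal suffix of the pod name, add 1.
MY_POD_NAME="zookeeper-2"   # sample value; set by the Downward API in a real pod
MY_ID=$(echo "${MY_POD_NAME}" | awk -F'-' '{print $NF}')
MY_ID=$(expr ${MY_ID} + 1)
echo "${MY_ID}"
```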

[root@host4 scripts]# cd ..
[root@host4 zookeeper]# ls
conf  Dockerfile  scripts
[root@host4 zookeeper]# cat Dockerfile 
FROM harbor.test.com/middleware/zookeeper:3.4.10
MAINTAINER rongruixue@163.com

ARG zookeeper_version=3.4.10

COPY conf /conf/
COPY scripts /

Running docker build then produces the image harbor.test.com/middleware/zookeeper:v3.4.10.

Then we start the pods with the following YAML:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: zookeeper
spec:
 # podManagementPolicy: Parallel  # uncomment to start the 3 pods simultaneously instead of in order 0, 1, 2
  serviceName: zookeeper
  replicas: 3
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: zookeeper
  template:
    metadata:
      labels:
        app: zookeeper
    spec:
      volumes:
      - name: volume-logs
        hostPath: 
          path: /var/log/zookeeper
      - name: volume-data
        hostPath:
          path: /opt/zookeeper/data
      terminationGracePeriodSeconds: 10
      containers:
      - name: zookeeper
        image: harbor.test.com/middleware/zookeeper:v3.4.10
        imagePullPolicy: Always
        ports:
        - containerPort: 2181
          protocol: TCP
        - containerPort: 2888
          protocol: TCP
        - containerPort: 3888
          protocol: TCP
        env:
        - name: SERVICE_NAME
          value: "zookeeper"
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        volumeMounts:
        - name: volume-logs
          mountPath: /var/log/zookeeper
        #- name: volume-data  # do not mount /data to the host: if two pods land on the same node they would overwrite each other's data, including myid
         # mountPath: /data
        command:
          - /bin/bash
          - -c
          - -x
          - |
            /sed.sh # writes the pod IP into zoo.cfg and writes /data/myid
            sleep 10
            zkServer.sh start-foreground
      nodeSelector:
        zookeeper: enable
---
apiVersion: v1
kind: Service
metadata:
  name: zookeeper
spec:
  ports:
  - port: 2181
  selector:
    app: zookeeper 
  clusterIP: None

Original article: https://blog.51cto.com/4169523/2465894
