
Deep Dive into Kubernetes Pods: Multiple Containers per Pod

Published: 2016-02-22 19:30:03

Tags: kubernetes, docker, pod, multi-container pod

6. Deep Dive into Pods: Multiple Containers per Pod

Multi-container pods are arguably the essence of kube: they let the single-purpose containers of one application be grouped into a VM-like unit, where all containers share that unit's resources. This tight integration is a masterstroke; it makes replicas easy to clone and raises overall availability.
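In the v1 syntax, such a pod is one spec with several entries under containers, all sharing the pod's IP and volumes. A minimal sketch (the image names here are placeholders, not the images used later in this article):

```yaml
# Minimal two-container pod: both containers share one network
# namespace (one IP, one port space) and can mount shared volumes.
apiVersion: v1
kind: Pod
metadata:
  name: web
spec:
  containers:
  - name: nginx
    image: nginx          # placeholder image
    ports:
    - containerPort: 80
  - name: redis
    image: redis          # placeholder image
    ports:
    - containerPort: 6379
```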

What follows is my own learning path through multi-container pods, including the difficulties along the way; this problem troubled me for a whole month.

1. Test process:

According to the article http://www.csdn.net/article/2014-12-18/2823196 , pods also support multiple containers. Just reading about it was exciting: multi-container pods are arguably the essence of kube (for the specific advantages, see the 14 items in the introduction in Chapter 1).


① First test: failed

File used: nginx_redis_pod.json

Goal: two containers in one pod.

Result: failed. One container started; the other kept restarting. I didn't know whether the config file was at fault or something else.

Ports each image listens on:

image                                       port(s)
www.perofu.com:7070/centos6.4_ip_nginx      80, 22
www.perofu.com:7070/centos6.4_redis         6379, 22

 

[root@www pod_nginx_redis_kube]#  vi nginx_redis_pod.json 

{

  "id": "nginx-1",

  "kind": "Pod",

  "metadata": {

    "name": "nginx-1"

  },

  "apiVersion": "v1",

  "desiredState": {

    "manifest": {

      "version": "v1",

      "id": "nginx-1"

    }

  },

  "spec": {

    "containers": [

      {

        "name": "nginx-1",

        "image": "www.perofu.com:7070/centos6.4_ip_nginx",

        "command": [

          "/etc/rc.local"

        ]

      },

      {

        "name": "redis",

        "image": "www.perofu.com:7070/centos6.4_redis",

        "command": [

          "/etc/rc.local"

        ]

      }

    ]

  },

  "labels": {

    "name": "nginx-1"

  }

}

~                                                                                                                                                                                

"nginx_redis_pod.json" 36L, 602C written

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl create -f  nginx_redis_pod.json    

pods/nginx-1

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod                         

NAME                            READY     STATUS    RESTARTS   AGE

nginx-1                         0/2       Pending   0          4s

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod

NAME                            READY     STATUS    RESTARTS   AGE

nginx-1                         2/2       Running   1          56s

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod

NAME                            READY     STATUS    RESTARTS   AGE

nginx-1                         1/2       Running   2          59s

 

② Second test: failed

 

File used: nginx_redis_pod_test.json

 

Goal: every pod landed on node 192.168.16.240, which made me wonder: must all containers of one pod sit on a single node, or can they be spread out?

Result: failed. One container started; one kept restarting.

After adding a second node: a pod still only ever runs on one node. Some containers came up and some didn't, with no pattern I could see. The YAML followed the official multi-container examples, so why didn't it work?

 

Ports each image listens on:

image                                       port(s)
www.perofu.com:7070/centos6.4_ip_nginx      80, 22
www.perofu.com:7070/centos6.4_redis         6379, 22

 

[root@www pod_nginx_redis_kube]# cat nginx_redis_pod_test.json

---

apiVersion: v1

kind: Pod

metadata:

  name: www

spec:

  containers:

  - name: nginx

    image: www.perofu.com:7070/centos6.4_ip_nginx

    command:

    - /etc/rc.local

  - name: redis-1

    image: www.perofu.com:7070/centos6.4_redis

    command:

    - /etc/rc.local

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl describe pod www

Name:                           www

Namespace:                      default

Image(s):                       www.perofu.com:7070/centos6.4_ip_nginx,www.perofu.com:7070/centos6.4_redis

Node:                           192.168.16.240/192.168.16.240

Labels:                         <none>

Status:                         Running

Reason:

Message:

IP:                             172.22.2.2

Replication Controllers:        <none>

Containers:

  nginx:

    Image:              www.perofu.com:7070/centos6.4_ip_nginx

    State:              Running

      Started:          Thu, 21 Jan 2016 16:07:42 +0800

    Ready:              False

    Restart Count:      9

  redis-1:

    Image:              www.perofu.com:7070/centos6.4_redis

    State:              Running

      Started:          Thu, 21 Jan 2016 16:04:19 +0800

    Ready:              True

    Restart Count:      0

Conditions:

  Type          Status

  Ready         False 

Events:

  FirstSeen                             LastSeen                        Count   From                            SubobjectPath                           Reason          Message

  Thu, 21 Jan 2016 16:03:34 +0800       Thu, 21 Jan 2016 16:03:34 +0800 1       {scheduler }                                                            scheduled       Successfully assigned www to 192.168.16.240

  Thu, 21 Jan 2016 16:04:16 +0800       Thu, 21 Jan 2016 16:04:16 +0800 1       {kubelet 192.168.16.240}        implicitly required container POD       pulled          Pod container image "gcr.io/google_containers/pause:0.8.0" already present on machine

  Thu, 21 Jan 2016 16:04:17 +0800       Thu, 21 Jan 2016 16:04:17 +0800 1       {kubelet 192.168.16.240}        implicitly required container POD       created         Created with docker id c7884e18b3ff

  Thu, 21 Jan 2016 16:04:17 +0800       Thu, 21 Jan 2016 16:04:17 +0800 1       {kubelet 192.168.16.240}        implicitly required container POD       started         Started with docker id c7884e18b3ff

  Thu, 21 Jan 2016 16:04:18 +0800       Thu, 21 Jan 2016 16:04:18 +0800 1       {kubelet 192.168.16.240}        spec.containers{nginx}                  created         Created with docker id ce71cdeddab3

  Thu, 21 Jan 2016 16:04:18 +0800       Thu, 21 Jan 2016 16:04:18 +0800 1       {kubelet 192.168.16.240}        spec.containers{nginx}                  started         Started with docker id ce71cdeddab3

  Thu, 21 Jan 2016 16:04:19 +0800       Thu, 21 Jan 2016 16:04:19 +0800 1       {kubelet 192.168.16.240}        spec.containers{redis-1}                created         Created with docker id 15ca805a9e99

  Thu, 21 Jan 2016 16:04:19 +0800       Thu, 21 Jan 2016 16:04:19 +0800 1       {kubelet 192.168.16.240}        spec.containers{redis-1}                started         Started with docker id 15ca805a9e99

  Thu, 21 Jan 2016 16:04:32 +0800       Thu, 21 Jan 2016 16:04:32 +0800 1       {kubelet 192.168.16.240}        spec.containers{nginx}                  created         Created with docker id edfacd145c9b

  Thu, 21 Jan 2016 16:04:32 +0800       Thu, 21 Jan 2016 16:04:32 +0800 1       {kubelet 192.168.16.240}        spec.containers{nginx}                  started         Started with docker id edfacd145c9b

 

 

③ Third test: success, by luck

 

File used: test2_redis_nginx_pod.yaml

 

Goal: test with other images (ones that start without a command).

Result: success. Suspicion: the config file only allows one command.

 

The difference between this YAML and the previous ones:

Success: two images, one with a command, one without.

Failure: two images, both with commands.

 

On top of that, the documents online, including the official ones, all show examples without command; since my images are home-built and all start via a command, I kept failing.

It was not a problem with the config file, but a kube pitfall. From this I concluded:

When creating multiple containers in one pod, only one image can use a command, otherwise some container will keep restarting. (The fifth test below revises this: the real cause was a port conflict.)

 


 

Ports each image listens on:

image                                       port(s)
www.perofu.com:7070/registry                5000
www.perofu.com:7070/centos6.4_redis         6379, 22

 

[root@www pod_nginx_redis_kube]# vi test2_redis_nginx_pod.yaml 

 

  apiVersion: v1

  kind: ReplicationController

  metadata:

    name: perofu4

    labels:

      name: perofu4

  spec:

    replicas: 1

    selector:

      name: perofu4

    template:

      metadata:

        labels:

          name: perofu4

      spec:

        containers:

        - name: redis

          image: www.perofu.com:7070/centos6.4_redis

          command:

          - '/bin/bash'

          - '-c'

          - '/etc/rc.local'

        - name: nginx

          image: www.perofu.com:7070/registry

[root@www pod_nginx_redis_kube]#  /usr/bin/kubectl create -f  test2_redis_nginx_pod.yaml 

replicationcontrollers/perofu4

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod 

NAME             READY     STATUS    RESTARTS   AGE

perofu-wduar   1/2       Running   1          14s

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod 

NAME             READY     STATUS    RESTARTS   AGE

perofu-wduar   2/2       Running   2          2m

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod 

NAME             READY     STATUS    RESTARTS   AGE

perofu-wduar   2/2       Running   2          2m

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl describe pod perofu-wduar

Name:                           perofu-wduar

Namespace:                      default

Image(s):                       www.perofu.com:7070/centos6.4_redis,www.perofu.com:7070/registry

Node:                           192.168.16.240/192.168.16.240

Labels:                         name=perofu4

Status:                         Running

Reason:

Message:

IP:                             172.22.2.9

Replication Controllers:        perofu4 (1/1 replicas created)

Containers:

  redis:

    Image:              www.perofu.com:7070/centos6.4_redis

    State:              Running

      Started:          Tue, 16 Feb 2016 16:54:32 +0800

    Ready:              True

    Restart Count:      0

  nginx:

    Image:              www.perofu.com:7070/registry

    State:              Running

      Started:          Tue, 16 Feb 2016 16:54:50 +0800

    Ready:              True

    Restart Count:      2

Conditions:

  Type          Status

  Ready         True 

 

# Verify

[root@localhost ~]# docker ps 

CONTAINER ID        IMAGE                                  COMMAND                CREATED             STATUS              PORTS               NAMES

5c6581d919a5        www.perofu.com:7070/registry:latest    "docker-registry"      2 minutes ago       Up 2 minutes                            k8s_nginx.65e9d290_perofu-wduar_default_eaa53352-cc1f-11e5-938c-000c29eae008_03b774d2   

08315bce234b        centos6.4_redis:latest                 "/bin/bash -c /etc/r   2 minutes ago       Up 2 minutes                            k8s_redis.510ddc4c_perofu-wduar_default_eaa53352-cc1f-11e5-938c-000c29eae008_3e6b7907   

15bd3a44a717        gcr.io/google_containers/pause:0.8.0   "/pause"               2 minutes ago       Up 2 minutes                            k8s_POD.e4cc795_perofu-wduar_default_eaa53352-cc1f-11e5-938c-000c29eae008_8c768cca      

[root@localhost ~]# 

[root@localhost ~]# 

[root@localhost ~]# docker exec -it 08315bce234b /bin/bash

[root@perofu-wduar /]# 

[root@perofu-wduar /]# ifconfig 

eth0      Link encap:Ethernet  HWaddr 02:42:AC:16:02:09  

          inet addr:172.22.2.9  Bcast:0.0.0.0  Mask:255.255.255.0

          inet6 addr: fe80::42:acff:fe16:209/64 Scope:Link

          UP BROADCAST RUNNING  MTU:1500  Metric:1

          RX packets:10 errors:0 dropped:0 overruns:0 frame:0

          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0 

          RX bytes:823 (823.0 b)  TX bytes:718 (718.0 b)

 

lo        Link encap:Local Loopback  

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

          RX packets:8 errors:0 dropped:0 overruns:0 frame:0

          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0 

          RX bytes:518 (518.0 b)  TX bytes:518 (518.0 b)

 

[root@perofu-wduar /]# netstat -anplt

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address               Foreign Address             State       PID/Program name   

tcp        0      0 0.0.0.0:22                  0.0.0.0:*                   LISTEN      26/sshd             

tcp        0      0 0.0.0.0:5000                0.0.0.0:*                   LISTEN      -                   

tcp        0      0 0.0.0.0:6379                0.0.0.0:*                   LISTEN      27/redis-server 0.0 

tcp        0      0 :::22                       :::*                        LISTEN      26/sshd             

[root@perofu-wduar /]# 

[root@perofu-wduar /]# ps axu

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND

root         1  0.0  0.2  11348  1288 ?        Ss   03:54   0:00 /bin/sh /etc/rc.local

root        26  0.0  0.6  64168  2972 ?        S    03:54   0:00 /usr/sbin/sshd -D

root        27  0.1  0.3  36484  1896 ?        Ssl  03:54   0:00 redis-server 0.0.0.0:6379         

root        30  0.1  0.3  11484  1816 ?        S    03:59   0:00 /bin/bash

root        45  0.0  0.2  13364  1044 ?        R+   04:00   0:00 ps axu

[root@perofu-wduar /]#       

[root@perofu-wduar /]# ps -ef

UID        PID  PPID  C STIME TTY          TIME CMD

root         1     0  0 03:54 ?        00:00:00 /bin/sh /etc/rc.local

root        26     1  0 03:54 ?        00:00:00 /usr/sbin/sshd -D

root        27     1  0 03:54 ?        00:00:00 redis-server 0.0.0.0:6379         

root        30     0  0 03:59 ?        00:00:00 /bin/bash

root        46    30  0 04:01 ?        00:00:00 ps -ef

 

 

# Log in to the second container:

[root@localhost ~]# docker exec -it 5c6581d919a5 /bin/bash

root@perofu-wduar:/# 

root@perofu-wduar:/# ifconfig 

eth0      Link encap:Ethernet  HWaddr 02:42:ac:16:02:09  

          inet addr:172.22.2.9  Bcast:0.0.0.0  Mask:255.255.255.0

          inet6 addr: fe80::42:acff:fe16:209/64 Scope:Link

          UP BROADCAST RUNNING  MTU:1500  Metric:1

          RX packets:10 errors:0 dropped:0 overruns:0 frame:0

          TX packets:10 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0 

          RX bytes:823 (823.0 B)  TX bytes:718 (718.0 B)

 

lo        Link encap:Local Loopback  

          inet addr:127.0.0.1  Mask:255.0.0.0

          inet6 addr: ::1/128 Scope:Host

          UP LOOPBACK RUNNING  MTU:65536  Metric:1

          RX packets:8 errors:0 dropped:0 overruns:0 frame:0

          TX packets:8 errors:0 dropped:0 overruns:0 carrier:0

          collisions:0 txqueuelen:0 

          RX bytes:518 (518.0 B)  TX bytes:518 (518.0 B)

 

root@perofu-wduar:/# 

root@perofu-wduar:/# ps axu

USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND

root         1  0.0  0.9  55520  4716 ?        Ss   08:54   0:00 /usr/bin/python /usr/local/bin/gunicorn --access-logfile - --error-logfile - --max-requests 100 -k gevent --grac

root        16  0.9  5.8 104620 28940 ?        S    08:54   0:07 /usr/bin/python /usr/local/bin/gunicorn --access-logfile - --error-logfile - --max-requests 100 -k gevent --grac

root        17  0.9  6.3 104632 31392 ?        S    08:54   0:07 /usr/bin/python /usr/local/bin/gunicorn --access-logfile - --error-logfile - --max-requests 100 -k gevent --grac

root        20  0.9  6.3 104644 31356 ?        S    08:54   0:07 /usr/bin/python /usr/local/bin/gunicorn --access-logfile - --error-logfile - --max-requests 100 -k gevent --grac

root        21  0.9  6.3 104656 31456 ?        S    08:54   0:07 /usr/bin/python /usr/local/bin/gunicorn --access-logfile - --error-logfile - --max-requests 100 -k gevent --grac

root        28  0.7  0.3  18140  1968 ?        S    09:08   0:00 /bin/bash

root        44  0.0  0.2  15568  1132 ?        R+   09:08   0:00 ps axu

root@perofu-wduar:/# 

root@perofu-wduar:/# netstat -anptl

Active Internet connections (servers and established)

Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name

tcp        0      0 0.0.0.0:22              0.0.0.0:*               LISTEN      -               

tcp        0      0 0.0.0.0:5000            0.0.0.0:*               LISTEN      1/python        

tcp        0      0 0.0.0.0:6379            0.0.0.0:*               LISTEN      -               

tcp6       0      0 :::22                   :::*                    LISTEN      -               

root@perofu-wduar:/# 

 

# Both containers in the same pod have the IP 172.22.2.9, and both show the redis port 6379 and the python port 5000

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod 

NAME             READY     STATUS    RESTARTS   AGE

perofu-wduar   2/2       Running   2          17h

 

④ Fourth test: failed

 

File used: test6_redis_nginx_pod_faile.yaml

 

Goal: rebuild the images so the startup script is baked into the container, then retest whether command was really the problem.

Result: failed. One pod, two containers, no command anywhere, and one container still kept restarting. Why?

 

Ports each image listens on:

image                                           port(s)
www.perofu.com:7070/centos6.4_ip_nginx_cmd      80, 22
www.perofu.com:7070/centos6.4_ip_redis_cmd      6379, 22

 

[root@www pod_nginx_redis_kube]# vi test6_redis_nginx_pod_faile.yaml 

 

  apiVersion: v1

  kind: ReplicationController

  metadata:

    name: perofu1

    labels:

      name: perofu1

  spec:

    replicas: 1

    selector:

      name: perofu1

    template:

      metadata:

        labels:

          name: perofu1

      spec:

        containers:

        - name: nginx5

          image: www.perofu.com:7070/centos6.4_ip_nginx_cmd

        - name: redis5

          image: www.perofu.com:7070/centos6.4_ip_redis_cmd

~                                                                                                                                                                                

"test6_redis_nginx_pod_ok.yaml" 20L, 439C written

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl create -f test6_redis_nginx_pod_faile.yaml

replicationcontrollers/perofu1

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod 

NAME             READY     STATUS    RESTARTS   AGE

perofu1-9h8of   0/2       Pending   0          3s

perofu-wduar   2/2       Running   0          19h

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod 

NAME             READY     STATUS    RESTARTS   AGE

perofu1-9h8of   1/2       Running   2          30s

perofu-wduar   2/2       Running   0          19h

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod 

NAME             READY     STATUS    RESTARTS   AGE

perofu1-9h8of   1/2       Running   10         2m

perofu-wduar   2/2       Running   0          19h

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl get pod 

NAME             READY     STATUS    RESTARTS   AGE

perofu1-9h8of   1/2       Running   8          3m

perofu-wduar   2/2       Running   0          19h

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# 

[root@www pod_nginx_redis_kube]# /usr/bin/kubectl describe pod perofu1-9h8of

Name:                           perofu1-9h8of

Namespace:                      default

Image(s):                       www.perofu.com:7070/centos6.4_ip_nginx_cmd,www.perofu.com:7070/centos6.4_ip_redis_cmd

Node:                           192.168.16.240/192.168.16.240

Labels:                         name=perofu1

Status:                         Running

Reason:

Message:

IP:                             172.22.2.14

Replication Controllers:        perofu1 (1/1 replicas created)

Containers:

  nginx5:

    Image:              www.perofu.com:7070/centos6.4_ip_nginx_cmd

    State:              Running

      Started:          Wed, 17 Feb 2016 12:12:12 +0800

    Ready:              True

    Restart Count:      0

  redis5:

    Image:              www.perofu.com:7070/centos6.4_ip_redis_cmd

    State:              Running

      Started:          Wed, 17 Feb 2016 12:16:31 +0800

    Ready:              False

    Restart Count:      11

Conditions:

  Type          Status

  Ready         False 

 

⑤ Fifth test: problem found (port conflict)

 

File used: test12_redis_nginx_pod_ok.yaml

 

Sudden idea: every image I build starts sshd. Could this be a port conflict?

Goal: verify whether a port conflict was causing the constant restarts.

Result: it was indeed a port conflict (sshd) that kept a container restarting.
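This failure mode is plain port contention and can be reproduced without Docker or kube at all. A small sketch in Python (not from the article): two sockets stand in for the two sshd processes sharing the pod's network namespace.

```python
import socket

# Containers in one pod share a single network namespace, so all their
# sockets draw from the same port space. Simulate two sshd processes
# that both try to bind the same port: the second bind fails with
# "Address already in use", the process dies, and kubelet restarts
# that container over and over -- the restart loop seen above.
first = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
first.bind(("127.0.0.1", 0))   # first container's sshd grabs a free port
first.listen(1)
port = first.getsockname()[1]

second = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
try:
    second.bind(("127.0.0.1", port))  # second sshd: same namespace, same port
    conflict = False
except OSError:
    conflict = True

print(conflict)  # -> True: the second bind is rejected
first.close()
second.close()
```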

 

Ports each image listens on:

image                                                           port(s)
www.perofu.com:7070/centos6.4_ip_nginx_cmd                      80, 22
www.perofu.com:7070/centos6.4_ip_redis_cmd_without_sshd_v1      6379

 

[root@www pod_nginx_redis_kube]# cat test12_redis_nginx_pod_ok.yaml

  apiVersion: v1

  kind: ReplicationController

  metadata:

    name: wechatv

    labels:

      name: wechatv

  spec:

    replicas: 1

    selector:

      name: wechatv

    template:

      metadata:

        labels:

          name: wechatv

      spec:

        containers:

        - name: nginx

          image: www.perofu.com:7070/centos6.4_ip_nginx_cmd

        - name: redis

          image: www.perofu.com:7070/centos6.4_ip_redis_cmd_without_sshd_v1

[root@www pod_nginx_redis_kube]# 

 

2. Summary of problems:

① It turns out that images built purely for Docker, which all start ssh, cannot be placed into the same kube pod (this was the main problem); new images need to be built specifically for kube.

 

② For containers managed by kube, I couldn't find out why they restarted: the kube logs showed nothing, the container restarted straight away, and its startup logs were gone, so the cause was slow to pin down.

 

③ From now on, docker image builds for kube need two sets: one with ssh and one without, i.e. in the Dockerfile, just don't start ssh from /etc/rc.local. Thanks to the build cache, once the first set of images is built, edit the ssh part of the /etc/rc.local used by the Dockerfile and build again; it is fast, really fast.
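A hypothetical Dockerfile sketch of point ③ (base image, package names and the rc.local path are assumptions, not taken from the author's images): keep the expensive layers identical across both variants so the cache does the work, and differ only in the copied rc.local.

```dockerfile
# Sketch: build two variants from the same layers; only the last
# COPY differs. The with-ssh rc.local starts sshd, the kube variant's
# does not, so rebuilding reuses every cached layer above the COPY.
FROM centos:6.4
RUN yum install -y redis openssh-server
# rc.local.nossh (kube variant) omits the "service sshd start" line
COPY rc.local.nossh /etc/rc.local
CMD ["/bin/sh", "/etc/rc.local"]
```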

 

④ When using a multi-container pod in kube, it is still best to include one container with ssh, to make logins easy for users.
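Point ④ could look like the sketch below (the ssh image name is hypothetical): the application containers carry no sshd, and exactly one sidecar listens on port 22, since the shared network namespace allows only one listener per port.

```yaml
# Sketch: ssh access via a dedicated sidecar instead of sshd in every image.
apiVersion: v1
kind: Pod
metadata:
  name: redis-with-ssh
spec:
  containers:
  - name: redis
    image: www.perofu.com:7070/centos6.4_ip_redis_cmd_without_sshd_v1
  - name: ssh
    image: sshd-only      # hypothetical image that runs only sshd
    ports:
    - containerPort: 22   # the only container binding :22 in this pod
```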



This article is from the "无咎" blog; please do not repost.


Original article: http://perofu.blog.51cto.com/6061242/1743980
