
Kubernetes Installation Manual (Working Version)



The control-plane components run as static Pods or DaemonSets, so this guide should mostly apply to any host OS that can run Docker.
The finished install is intended for learning and experimentation only.

Versions installed in this guide:

  • Kubernetes v1.10.0 (both 1.10.0 and 1.10.3 verified to work)
  • CNI v0.6.0
  • Etcd v3.1.13
  • Calico v3.0.4
  • Docker CE latest version(18.03)


Node Information
This tutorial deploys the Kubernetes cluster with the node counts and specs below; the operating system can be Ubuntu 16.x or CentOS 7.x.

IP            Hostname  CPU  Memory
192.16.35.11  K8S-M1    1    4G
192.16.35.12  K8S-M2    1    4G
192.16.35.13  K8S-M3    1    4G
192.16.35.14  K8S-N1    1    4G
192.16.35.15  K8S-N2    1    4G
192.16.35.16  K8S-N3    1    4G

In addition, the master nodes together provide a VIP: 192.16.35.10.

  • All operations are performed as the root user (for convenience); from an SRE standpoint this is not recommended.
  • A Vagrantfile can be downloaded to build the cluster of VirtualBox VMs, but make sure the machine has enough resources.

Prerequisites

  • All machines can reach one another over the network, and k8s-m1 can SSH into every other node without a password.
  • Firewalls and SELinux are disabled on all machines. For example, on CentOS:

    $ systemctl stop firewalld && systemctl disable firewalld
    $ setenforce 0
    $ vim /etc/selinux/config
    SELINUX=disabled
  • All machines need /etc/hosts entries resolving every cluster host.

    ...
    192.16.35.11 k8s-m1
    192.16.35.12 k8s-m2
    192.16.35.13 k8s-m3
    192.16.35.14 k8s-n1
    192.16.35.15 k8s-n2
    192.16.35.16 k8s-n3
  • All machines need the Docker CE container engine installed:

    $ curl -fsSL "https://get.docker.com/" | sh
  • On both Ubuntu and CentOS, this single command installs the latest Docker version.
  • On CentOS, additionally run the following after installation:
$ systemctl enable docker && systemctl start docker
  • All machines need the system parameters in /etc/sysctl.d/k8s.conf set:

    $ cat <<EOF > /etc/sysctl.d/k8s.conf
    net.ipv4.ip_forward = 1
    net.bridge.bridge-nf-call-ip6tables = 1
    net.bridge.bridge-nf-call-iptables = 1
    EOF

    $ sysctl -p /etc/sysctl.d/k8s.conf
  • Kubernetes v1.8+ requires swap to be disabled; if you do not disable it you must change the kubelet settings instead. On all machines, disable swap and comment out the swap line in /etc/fstab:

    $ swapoff -a && sysctl -w vm.swappiness=0
    $ sed -ri '/^[^#]*swap/s@^@#@' /etc/fstab
  • Make sure getenforce reports Disabled; if it does not, reboot.

  • All machines should pull the following images in advance:

REPOSITORY               TAG                 IMAGE ID            CREATED             SIZE
quay.io/calico/node v3.0.4 5361c5a52912 8 weeks ago 278MB
quay.io/calico/cni v2.0.3 cef0252b1749 2 months ago 69.1MB
k8s.gcr.io/pause-amd64 3.1 da86e6ba6ca1 5 months ago 742kB

These three images cannot be pulled because of the firewall, so I have saved them to a file (if you have a way around it, you can pull the images above directly).
The file is at https://pan.baidu.com/s/1v7uN4ht-7qvA1uk9ZMmuMA
That is a Baidu Cloud link; if you cannot download it or it is throttled, use the Qiniu Cloud address below to download and import the images:

$ wget http://ols7lqkih.bkt.clouddn.com/images.tar.gz
$ docker load -i images.tar.gz

 

  • All Node machines should pull the following image in advance:
quay.io/calico/kube-controllers                      v2.0.2              0754e1c707e7        2 months ago        55.1MB

This one is blocked as well; if you cannot pull it, import it from my Qiniu Cloud address:

$ wget http://ols7lqkih.bkt.clouddn.com/calico-kube-proxy-adm64.tar.gz
$ docker load -i calico-kube-proxy-adm64.tar.gz

 

  • All machines download the Kubernetes binaries:

If you have no way around the firewall, I have uploaded kubectl and kubelet to my Qiniu Cloud; download them with the commands below.

$ wget http://ols7lqkih.bkt.clouddn.com/kubelet -O /usr/local/bin/kubelet
$ chmod +x /usr/local/bin/kubelet
# on Node machines, skip downloading kubectl
$ wget http://ols7lqkih.bkt.clouddn.com/kubectl -O /usr/local/bin/kubectl
$ chmod +x /usr/local/bin/kubectl

# the md5 checksums are below; compare them yourself to check whether the files are corrupted

[root@k8s-m1 ~]# md5sum /usr/local/bin/kubelet
a3ced404a71f94d2fa9230635ed4e407 kubelet
[root@k8s-m1 ~]# md5sum /usr/local/bin/kubectl
e1f801301614463e1f13cf28b4443608 kubectl

 

If you do have a way around the firewall, use the original addresses below:

$ export KUBE_URL="https://storage.googleapis.com/kubernetes-release/release/v1.10.0/bin/linux/amd64"
$ wget "${KUBE_URL}/kubelet" -O /usr/local/bin/kubelet
$ chmod +x /usr/local/bin/kubelet

# on Node machines, skip downloading kubectl
$ wget "${KUBE_URL}/kubectl" -O /usr/local/bin/kubectl
$ chmod +x /usr/local/bin/kubectl
  • All machines download the Kubernetes CNI binaries (if the piped command errors out on CentOS, just download the tarball and extract it into the directory manually):

    mkdir -p /opt/cni/bin && cd /opt/cni/bin
    export CNI_URL="https://github.com/containernetworking/plugins/releases/download"
    wget -qO- "${CNI_URL}/v0.6.0/cni-plugins-amd64-v0.6.0.tgz" | tar -zx
  • k8s-m1 needs the CFSSL tools installed; they will be used to create the TLS certificates.

    $ export CFSSL_URL="https://pkg.cfssl.org/R1.2"
    $ wget "${CFSSL_URL}/cfssl_linux-amd64" -O /usr/local/bin/cfssl
    $ wget "${CFSSL_URL}/cfssljson_linux-amd64" -O /usr/local/bin/cfssljson
    $ chmod +x /usr/local/bin/cfssl /usr/local/bin/cfssljson

Creating the Cluster CA Keys and Certificates
In this part we generate certificates for several components, including Etcd and the Kubernetes components. Each cluster has a root certificate authority (Root CA) that is used to authenticate the API Server and the kubelet-side certificates.

  • PS: note that the CN (Common Name) and O (Organization) fields in the CA JSON files affect how Kubernetes authenticates the components.
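As an illustration only (a sketch, not the exact file served from the PKI_URL used below), a cfssl CSR JSON for an admin identity typically puts the user name in CN and the group in O, which is what the RBAC rules later key off:

$ cat <<EOF > example-admin-csr.json
{
  "CN": "kubernetes-admin",
  "key": { "algo": "rsa", "size": 2048 },
  "names": [
    { "O": "system:masters" }
  ]
}
EOF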

Etcd
First, on k8s-m1, create the /etc/etcd/ssl directory, then change into it and perform the following steps.

$ mkdir -p /etc/etcd/ssl && cd /etc/etcd/ssl
$ export PKI_URL="https://kairen.github.io/files/manual-v1.10/pki"

 

Download the ca-config.json and etcd-ca-csr.json files, and generate the CA keys and certificate from the CSR JSON:

$ wget "${PKI_URL}/ca-config.json" "${PKI_URL}/etcd-ca-csr.json"
$ cfssl gencert -initca etcd-ca-csr.json | cfssljson -bare etcd-ca

 

Download the etcd-csr.json file and generate the Etcd certificate:

$ wget "${PKI_URL}/etcd-csr.json"
$ cfssl gencert \
-ca=etcd-ca.pem \
-ca-key=etcd-ca-key.pem \
-config=ca-config.json \
-hostname=127.0.0.1,192.16.35.11,192.16.35.12,192.16.35.13 \
-profile=kubernetes \
etcd-csr.json | cfssljson -bare etcd

 

  • -hostname must be changed to cover all master nodes.

When done, delete the files that are no longer needed:

$ rm -rf *.json *.csr

 

Confirm that /etc/etcd/ssl contains the following files:

$ ls /etc/etcd/ssl
etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem

 

Copy the files to the other Etcd nodes (here, all the master nodes):

$ for NODE in k8s-m2 k8s-m3; do
echo "--- $NODE ---"
ssh ${NODE} "mkdir -p /etc/etcd/ssl"
for FILE in etcd-ca-key.pem etcd-ca.pem etcd-key.pem etcd.pem; do
scp /etc/etcd/ssl/${FILE} ${NODE}:/etc/etcd/ssl/${FILE}
done
done

 

Kubernetes
On k8s-m1, create the pki directory, then change into it for the rest of this chapter.

$ mkdir -p /etc/kubernetes/pki && cd /etc/kubernetes/pki
$ export PKI_URL="https://kairen.github.io/files/manual-v1.10/pki"
$ export KUBE_APISERVER="https://192.16.35.10:6443"

 

Download the ca-config.json and ca-csr.json files and generate the CA certificate:

$ wget "${PKI_URL}/ca-config.json" "${PKI_URL}/ca-csr.json"
$ cfssl gencert -initca ca-csr.json | cfssljson -bare ca
$ ls ca*.pem
ca-key.pem ca.pem

 

API Server Certificate
Download the apiserver-csr.json file and generate the kube-apiserver certificate:

$ wget "${PKI_URL}/apiserver-csr.json"
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=10.96.0.1,192.16.35.10,127.0.0.1,kubernetes.default \
-profile=kubernetes \
apiserver-csr.json | cfssljson -bare apiserver

$ ls apiserver*.pem
apiserver-key.pem apiserver.pem

 

  • In -hostname, 10.96.0.1 is the in-cluster (Cluster IP) Kubernetes API endpoint;
  • 192.16.35.10 is the virtual IP address (VIP);
  • kubernetes.default is the Kubernetes DNS name.
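To double-check the result, one option (a sketch) is to inspect the Subject Alternative Names baked into the generated certificate:

$ openssl x509 -in apiserver.pem -noout -text | grep -A1 "Subject Alternative Name"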

Front Proxy Certificate
Download the front-proxy-ca-csr.json file and generate the Front Proxy CA key; the front proxy is used mainly by the API aggregator:

$ wget "${PKI_URL}/front-proxy-ca-csr.json"
$ cfssl gencert \
-initca front-proxy-ca-csr.json | cfssljson -bare front-proxy-ca

$ ls front-proxy-ca*.pem
front-proxy-ca-key.pem front-proxy-ca.pem

 

Download the front-proxy-client-csr.json file and generate the front-proxy-client certificate:

$ wget "${PKI_URL}/front-proxy-client-csr.json"
$ cfssl gencert \
-ca=front-proxy-ca.pem \
-ca-key=front-proxy-ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
front-proxy-client-csr.json | cfssljson -bare front-proxy-client

$ ls front-proxy-client*.pem
front-proxy-client-key.pem front-proxy-client.pem

 

Admin Certificate
Download the admin-csr.json file and generate the admin certificate:

$ wget "${PKI_URL}/admin-csr.json"
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
admin-csr.json | cfssljson -bare admin

$ ls admin*.pem
admin-key.pem admin.pem

 

Next, generate a kubeconfig file named admin.conf with the following commands:

# admin set cluster
$ kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=../admin.conf

# admin set credentials
$ kubectl config set-credentials kubernetes-admin \
--client-certificate=admin.pem \
--client-key=admin-key.pem \
--embed-certs=true \
--kubeconfig=../admin.conf

# admin set context
$ kubectl config set-context kubernetes-admin@kubernetes \
--cluster=kubernetes \
--user=kubernetes-admin \
--kubeconfig=../admin.conf

# admin set default context
$ kubectl config use-context kubernetes-admin@kubernetes \
--kubeconfig=../admin.conf

 

Controller Manager Certificate
Download the manager-csr.json file and generate the kube-controller-manager certificate:

$ wget "${PKI_URL}/manager-csr.json"
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
manager-csr.json | cfssljson -bare controller-manager

$ ls controller-manager*.pem
controller-manager-key.pem controller-manager.pem

 

Next, generate a kubeconfig file named controller-manager.conf:

# controller-manager set cluster
$ kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=../controller-manager.conf

# controller-manager set credentials
$ kubectl config set-credentials system:kube-controller-manager \
--client-certificate=controller-manager.pem \
--client-key=controller-manager-key.pem \
--embed-certs=true \
--kubeconfig=../controller-manager.conf

# controller-manager set context
$ kubectl config set-context system:kube-controller-manager@kubernetes \
--cluster=kubernetes \
--user=system:kube-controller-manager \
--kubeconfig=../controller-manager.conf

# controller-manager set default context
$ kubectl config use-context system:kube-controller-manager@kubernetes \
--kubeconfig=../controller-manager.conf

 

Scheduler Certificate
Download the scheduler-csr.json file and generate the kube-scheduler certificate:

$ wget "${PKI_URL}/scheduler-csr.json"
$ cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-profile=kubernetes \
scheduler-csr.json | cfssljson -bare scheduler

$ ls scheduler*.pem
scheduler-key.pem scheduler.pem

 

Next, generate a kubeconfig file named scheduler.conf:

# scheduler set cluster
$ kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=../scheduler.conf

# scheduler set credentials
$ kubectl config set-credentials system:kube-scheduler \
--client-certificate=scheduler.pem \
--client-key=scheduler-key.pem \
--embed-certs=true \
--kubeconfig=../scheduler.conf

# scheduler set context
$ kubectl config set-context system:kube-scheduler@kubernetes \
--cluster=kubernetes \
--user=system:kube-scheduler \
--kubeconfig=../scheduler.conf

# scheduler use default context
$ kubectl config use-context system:kube-scheduler@kubernetes \
--kubeconfig=../scheduler.conf

 

Master Kubelet Certificate
Next, on k8s-m1, download the kubelet-csr.json file and generate the per-node kubelet certificates:

$ wget "${PKI_URL}/kubelet-csr.json"
$ for NODE in k8s-m1 k8s-m2 k8s-m3; do
echo "--- $NODE ---"
cp kubelet-csr.json kubelet-$NODE-csr.json;
sed -i "s/\$NODE/$NODE/g" kubelet-$NODE-csr.json;
cfssl gencert \
-ca=ca.pem \
-ca-key=ca-key.pem \
-config=ca-config.json \
-hostname=$NODE \
-profile=kubernetes \
kubelet-$NODE-csr.json | cfssljson -bare kubelet-$NODE
done

$ ls kubelet*.pem
kubelet-k8s-m1-key.pem kubelet-k8s-m1.pem kubelet-k8s-m2-key.pem kubelet-k8s-m2.pem kubelet-k8s-m3-key.pem kubelet-k8s-m3.pem
  • -hostname and $NODE must be adjusted per node here (handled by the loop above).

When done, copy the kubelet certificates to the other master nodes:

$ for NODE in k8s-m2 k8s-m3; do
echo "--- $NODE ---"
ssh ${NODE} "mkdir -p /etc/kubernetes/pki"
for FILE in kubelet-$NODE-key.pem kubelet-$NODE.pem ca.pem; do
scp /etc/kubernetes/pki/${FILE} ${NODE}:/etc/kubernetes/pki/${FILE}
done
done

 

Then, from k8s-m1, run the following to generate a kubeconfig file named kubelet.conf on each master:

$ for NODE in k8s-m1 k8s-m2 k8s-m3; do
echo "--- $NODE ---"
ssh ${NODE} "cd /etc/kubernetes/pki && \
kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=../kubelet.conf && \
kubectl config set-credentials system:node:${NODE} \
--client-certificate=kubelet-${NODE}.pem \
--client-key=kubelet-${NODE}-key.pem \
--embed-certs=true \
--kubeconfig=../kubelet.conf && \
kubectl config set-context system:node:${NODE}@kubernetes \
--cluster=kubernetes \
--user=system:node:${NODE} \
--kubeconfig=../kubelet.conf && \
kubectl config use-context system:node:${NODE}@kubernetes \
--kubeconfig=../kubelet.conf && \
rm kubelet-${NODE}.pem kubelet-${NODE}-key.pem"
done

Service Account Key
Service accounts are not authenticated through the CA, so the CA is not used to verify the service account key; instead, create a private/public key pair for the service account key.
Run the following on k8s-m1:

$ openssl genrsa -out sa.key 2048
$ openssl rsa -in sa.key -pubout -out sa.pub
$ ls sa.*
sa.key sa.pub
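As a quick sanity check (a sketch), you can confirm that the public key really corresponds to the private key; the diff should print nothing:

$ openssl rsa -in sa.key -pubout | diff - sa.pub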

 

Deleting Unneeded Files
With everything prepared, the files that are no longer needed can be deleted:

$ rm -rf *.json *.csr scheduler*.pem controller-manager*.pem admin*.pem kubelet*.pem

 

Copying Files to the Other Nodes
Copy the certificate files to the other master nodes:

$ for NODE in k8s-m2 k8s-m3; do
echo "--- $NODE ---"
for FILE in $(ls /etc/kubernetes/pki/); do
scp /etc/kubernetes/pki/${FILE} ${NODE}:/etc/kubernetes/pki/${FILE}
done
done

 

Copy the Kubernetes config files to the other master nodes:

$ for NODE in k8s-m2 k8s-m3; do
echo "--- $NODE ---"
for FILE in admin.conf controller-manager.conf scheduler.conf; do
scp /etc/kubernetes/${FILE} ${NODE}:/etc/kubernetes/${FILE}
done
done

 

Kubernetes Masters
This part explains how to build and configure the Kubernetes master role. The following components are deployed:

  • kube-apiserver: exposes the REST APIs, handling authorization, authentication, and state storage.
  • kube-controller-manager: maintains cluster state, e.g. auto-scaling and rolling updates.
  • kube-scheduler: schedules resources, assigning Pods to nodes according to the configured scheduling policy.
  • Etcd: the key/value store that holds all cluster state.
  • HAProxy: provides the load balancer.
  • Keepalived: provides the virtual IP address (VIP).

Deployment and Configuration
First, on all master nodes, download the YAML files for these components. Instead of running binaries managed by systemd, everything here runs as static Pods; download the files into /etc/kubernetes/manifests:

(Friendly reminder: pulling these images requires a way around the firewall.
If you do not have one, replace the gcr.io/google_containers and k8s.gcr.io parts of the image names with mirrorgooglecontainers,
e.g. change gcr.io/google_containers/kube-apiserver-amd64 to
mirrorgooglecontainers/kube-apiserver-amd64.
In keepalived, change interface to the host's actual network interface name.
The same applies to the image names in all of the later files if you have no way around the firewall.)

$ export CORE_URL="https://kairen.github.io/files/manual-v1.10/master"
$ mkdir -p /etc/kubernetes/manifests && cd /etc/kubernetes/manifests
$ for FILE in kube-apiserver kube-controller-manager kube-scheduler haproxy keepalived etcd etcd.config; do
wget "${CORE_URL}/${FILE}.yml.conf" -O ${FILE}.yml
if [ ${FILE} == "etcd.config" ]; then
mv etcd.config.yml /etc/etcd/etcd.config.yml
sed -i "s/\${HOSTNAME}/${HOSTNAME}/g" /etc/etcd/etcd.config.yml
sed -i "s/\${PUBLIC_IP}/$(hostname -i)/g" /etc/etcd/etcd.config.yml
fi
done

$ ls /etc/kubernetes/manifests
etcd.yml haproxy.yml keepalived.yml kube-apiserver.yml kube-controller-manager.yml kube-scheduler.yml
  • If your IPs differ from this tutorial, remember to modify the YAML files; in keepalived.yml, change interface to the host's network interface name.
  • For NodeRestriction in kube-apiserver, see Using Node Authorization.
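If you have no way around the firewall, a minimal sketch of the registry swap mentioned above (assuming the mirrorgooglecontainers repository carries the same image tags) is:

$ cd /etc/kubernetes/manifests
$ sed -i 's#k8s.gcr.io#mirrorgooglecontainers#g; s#gcr.io/google_containers#mirrorgooglecontainers#g' *.yml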

Generate a key used to encrypt Etcd data:

$ head -c 32 /dev/urandom | base64
SUpbL4juUYyvxj3/gonV5xVEx8j769/99TSAf8YT/sQ=

 

  • Note: every master node must use the same key.

Then, on every master, in the /etc/kubernetes/ directory, use the key above with the command below to create the encryption.yml encryption config file:

$ cat <<EOF > /etc/kubernetes/encryption.yml
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: SUpbL4juUYyvxj3/gonV5xVEx8j769/99TSAf8YT/sQ=
      - identity: {}
EOF

 

Then, on every master, in the /etc/kubernetes/ directory, create the audit-policy.yml advanced audit policy file:

$ cat <<EOF > /etc/kubernetes/audit-policy.yml
apiVersion: audit.k8s.io/v1beta1
kind: Policy
rules:
- level: Metadata
EOF

 

  • For the audit policy, see Auditing.

On every master, download the haproxy.cfg file for the HAProxy container to use:

$ mkdir -p /etc/haproxy/
$ wget "${CORE_URL}/haproxy.cfg" -O /etc/haproxy/haproxy.cfg

 

  • If your IPs differ from this tutorial, remember to modify the config file.

On every master, download the kubelet.service related files used to manage kubelet:

$ mkdir -p /etc/systemd/system/kubelet.service.d
$ wget "${CORE_URL}/kubelet.service" -O /lib/systemd/system/kubelet.service
$ wget "${CORE_URL}/10-kubelet.conf" -O /etc/systemd/system/kubelet.service.d/10-kubelet.conf

 

  • If the cluster DNS or domain is different, modify 10-kubelet.conf accordingly.

Finally, on every master, create the var directories for state, then enable and start the kubelet service:

$ mkdir -p /var/lib/kubelet /var/log/kubernetes /var/lib/etcd
$ systemctl enable kubelet.service && systemctl start kubelet.service

 

After this it takes a while to pull the images and start the components; you can watch progress with:

$ watch netstat -ntlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 127.0.0.1:10248 0.0.0.0:* LISTEN 10344/kubelet
tcp 0 0 127.0.0.1:10251 0.0.0.0:* LISTEN 11324/kube-schedule
tcp 0 0 0.0.0.0:6443 0.0.0.0:* LISTEN 11416/haproxy
tcp 0 0 127.0.0.1:10252 0.0.0.0:* LISTEN 11235/kube-controll
tcp 0 0 0.0.0.0:9090 0.0.0.0:* LISTEN 11416/haproxy
tcp6 0 0 :::2379 :::* LISTEN 10479/etcd
tcp6 0 0 :::2380 :::* LISTEN 10479/etcd
tcp6 0 0 :::10255 :::* LISTEN 10344/kubelet
tcp6 0 0 :::5443 :::* LISTEN 11295/kube-apiserve

 

  • It takes time to pull the images here; be patient.
  • Output like the above means the services started correctly; if something goes wrong, use docker commands to investigate.
  • If any of the key control-plane containers keeps exiting, something was done wrong.

The steps above pull images and take a while; use the checks below to confirm the state is correct.

Verifying the Cluster
When done, copy the admin kubeconfig on any master node and verify with a few simple commands:

$ mkdir -p ~/.kube && cp /etc/kubernetes/admin.conf ~/.kube/config
$ kubectl get cs
NAME STATUS MESSAGE ERROR
controller-manager Healthy ok
scheduler Healthy ok
etcd-2 Healthy {"health": "true"}
etcd-1 Healthy {"health": "true"}
etcd-0 Healthy {"health": "true"}

$ kubectl get node
NAME STATUS ROLES AGE VERSION
k8s-m1 NotReady master 52s v1.10.0
k8s-m2 NotReady master 51s v1.10.0
k8s-m3 NotReady master 50s v1.10.0

$ kubectl -n kube-system get po
NAME READY STATUS RESTARTS AGE
etcd-k8s-m1 1/1 Running 0 7m
etcd-k8s-m2 1/1 Running 0 8m
etcd-k8s-m3 1/1 Running 0 7m
haproxy-k8s-m1 1/1 Running 0 7m
haproxy-k8s-m2 1/1 Running 0 8m
haproxy-k8s-m3 1/1 Running 0 8m
keepalived-k8s-m1 1/1 Running 0 8m
keepalived-k8s-m2 1/1 Running 0 7m
keepalived-k8s-m3 1/1 Running 0 7m
kube-apiserver-k8s-m1 1/1 Running 0 7m
kube-apiserver-k8s-m2 1/1 Running 0 6m
kube-apiserver-k8s-m3 1/1 Running 0 7m
kube-controller-manager-k8s-m1 1/1 Running 0 8m
kube-controller-manager-k8s-m2 1/1 Running 0 8m
kube-controller-manager-k8s-m3 1/1 Running 0 8m
kube-scheduler-k8s-m1 1/1 Running 0 8m
kube-scheduler-k8s-m2 1/1 Running 0 8m
kube-scheduler-k8s-m3 1/1 Running 0 8m

 

Next, confirm that commands such as logs can be run:

$ kubectl -n kube-system logs -f kube-scheduler-k8s-m2
Error from server (Forbidden): Forbidden (user=kube-apiserver, verb=get, resource=nodes, subresource=proxy) ( pods/log kube-scheduler-k8s-m2)

 

  • A 403 Forbidden appears here because the kube-apiserver user has no access to the nodes resource; this is expected at this point.

The kubectl commands from here on do not need to be run on every master; any single master is enough.
kubectl can create the resources described in a file read from a URL or from a local file.
For the YAML files referenced at the end of the later kubectl commands, remember to download them first, change the image registry parts (gcr.io/google_containers and k8s.gcr.io to mirrorgooglecontainers) as well as things like the apiserver IP,
and then point -f at the local file path.
The same advice applies to all of the later kubectl commands and will not be repeated.
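A minimal sketch of that local-file workflow, using the kube-proxy addon applied later as the example (and assuming the mirror carries the same tags):

$ wget "https://kairen.github.io/files/manual-v1.10/addon/kube-proxy.yml.conf" -O kube-proxy.yml
$ sed -i 's#k8s.gcr.io#mirrorgooglecontainers#g; s#gcr.io/google_containers#mirrorgooglecontainers#g' kube-proxy.yml
$ kubectl apply -f kube-proxy.yml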

$ kubectl apply -f "${CORE_URL}/apiserver-to-kubelet-rbac.yml.conf"
clusterrole.rbac.authorization.k8s.io "system:kube-apiserver-to-kubelet" configured
clusterrolebinding.rbac.authorization.k8s.io "system:kube-apiserver" configured

# test logs
$ kubectl -n kube-system logs -f kube-scheduler-k8s-m2
...
I0403 02:30:36.375935 1 server.go:555] Version: v1.10.0
I0403 02:30:36.378208 1 server.go:574] starting healthz server on 127.0.0.1:10251

Taint the master nodes so that ordinary Pods are not scheduled onto them:

$ kubectl taint nodes node-role.kubernetes.io/master="":NoSchedule --all
node "k8s-m1" tainted
node "k8s-m2" tainted
node "k8s-m3" tainted

 

Creating the TLS Bootstrapping RBAC and Secret
Because this installation enables TLS authentication, every node's kubelet must hold a certificate signed by the kube-apiserver's CA before it can talk to the kube-apiserver. Signing a certificate by hand for every node is tedious, and adding nodes makes it even harder to manage. TLS bootstrapping solves this: the kubelet first connects to the kube-apiserver as a predefined low-privilege user and then requests a certificate signature; when the bootstrap token matches, the kube-apiserver dynamically signs and issues the node's kubelet certificate. For details see TLS Bootstrapping and Authenticating with Bootstrap Tokens.
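To see how the kubelet is wired to use this bootstrap kubeconfig, you can grep the drop-in downloaded earlier (a sketch; the flag names are the standard kubelet ones, but the exact file contents come from the URL above):

$ grep -E 'bootstrap-kubeconfig|kubeconfig' /etc/systemd/system/kubelet.service.d/10-kubelet.conf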

First, on k8s-m1, create a variable to generate a BOOTSTRAP_TOKEN, and build the bootstrap-kubelet.conf kubeconfig file:

$ cd /etc/kubernetes/pki
$ export TOKEN_ID=$(openssl rand -hex 3)
$ export TOKEN_SECRET=$(openssl rand -hex 8)
$ export BOOTSTRAP_TOKEN=${TOKEN_ID}.${TOKEN_SECRET}
$ export KUBE_APISERVER="https://192.16.35.10:6443"

# bootstrap set cluster
$ kubectl config set-cluster kubernetes \
--certificate-authority=ca.pem \
--embed-certs=true \
--server=${KUBE_APISERVER} \
--kubeconfig=../bootstrap-kubelet.conf

# bootstrap set credentials
$ kubectl config set-credentials tls-bootstrap-token-user \
--token=${BOOTSTRAP_TOKEN} \
--kubeconfig=../bootstrap-kubelet.conf

# bootstrap set context
$ kubectl config set-context tls-bootstrap-token-user@kubernetes \
--cluster=kubernetes \
--user=tls-bootstrap-token-user \
--kubeconfig=../bootstrap-kubelet.conf

# bootstrap use default context
$ kubectl config use-context tls-bootstrap-token-user@kubernetes \
--kubeconfig=../bootstrap-kubelet.conf

 

  • If you would rather authorize by manually signing certificates, see Certificate.

Next, on k8s-m1, create the TLS bootstrap secret used for automatic certificate signing:

$ cat <<EOF | kubectl create -f -
apiVersion: v1
kind: Secret
metadata:
  name: bootstrap-token-${TOKEN_ID}
  namespace: kube-system
type: bootstrap.kubernetes.io/token
stringData:
  token-id: ${TOKEN_ID}
  token-secret: ${TOKEN_SECRET}
  usage-bootstrap-authentication: "true"
  usage-bootstrap-signing: "true"
  auth-extra-groups: system:bootstrappers:default-node-token
EOF

secret "bootstrap-token-65a3a9" created

 

k8s-m1建立 TLS Bootstrap Autoapprove RBAC:

1
2
3
4
$ kubectl apply -f "${CORE_URL}/kubelet-bootstrap-rbac.yml.conf"
clusterrolebinding.rbac.authorization.k8s.io "kubelet-bootstrap" created
clusterrolebinding.rbac.authorization.k8s.io "node-autoapprove-bootstrap" created
clusterrolebinding.rbac.authorization.k8s.io "node-autoapprove-certificate-rotation" created

 

Kubernetes Nodes
This part explains how to build and configure the Kubernetes node role; nodes are the worker machines that actually run container instances (Pods).
Before starting the deployment, copy the needed files from k8s-m1 to all node machines:

$ cd /etc/kubernetes/pki
$ for NODE in k8s-n1 k8s-n2 k8s-n3; do
echo "--- $NODE ---"
ssh ${NODE} "mkdir -p /etc/kubernetes/pki/"
ssh ${NODE} "mkdir -p /etc/etcd/ssl"
# Etcd
for FILE in etcd-ca.pem etcd.pem etcd-key.pem; do
scp /etc/etcd/ssl/${FILE} ${NODE}:/etc/etcd/ssl/${FILE}
done
# Kubernetes
for FILE in pki/ca.pem pki/ca-key.pem bootstrap-kubelet.conf; do
scp /etc/kubernetes/${FILE} ${NODE}:/etc/kubernetes/${FILE}
done
done

 

Deployment and Configuration
On every node machine, download the kubelet.service related files used to manage kubelet:

$ export CORE_URL="https://kairen.github.io/files/manual-v1.10/node"
$ mkdir -p /etc/systemd/system/kubelet.service.d
$ wget "${CORE_URL}/kubelet.service" -O /lib/systemd/system/kubelet.service
$ wget "${CORE_URL}/10-kubelet.conf" -O /etc/systemd/system/kubelet.service.d/10-kubelet.conf

 

  • If the cluster DNS or domain is different, modify 10-kubelet.conf.

Finally, on every node, create the var directories, then enable and start the kubelet service:

$ mkdir -p /var/lib/kubelet /var/log/kubernetes
$ systemctl enable kubelet.service && systemctl start kubelet.service

 

Verifying the Cluster
When done, verify from any master node with a few simple commands:

$ kubectl get csr
NAME AGE REQUESTOR CONDITION
csr-bvz9l 11m system:node:k8s-m1 Approved,Issued
csr-jwr8k 11m system:node:k8s-m2 Approved,Issued
csr-q867w 11m system:node:k8s-m3 Approved,Issued
node-csr-Y-FGvxZWJqI-8RIK_IrpgdsvjGQVGW0E4UJOuaU8ogk 17s system:bootstrap:dca3e1 Approved,Issued
node-csr-cnX9T1xp1LdxVDc9QW43W0pYkhEigjwgceRshKuI82c 19s system:bootstrap:dca3e1 Approved,Issued
node-csr-m7SBA9RAGCnsgYWJB-u2HoB2qLSfiQZeAxWFI2WYN7Y 18s system:bootstrap:dca3e1 Approved,Issued

$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
k8s-m1 NotReady master 12m v1.10.0
k8s-m2 NotReady master 11m v1.10.0
k8s-m3 NotReady master 11m v1.10.0
k8s-n1 NotReady node 32s v1.10.0
k8s-n2 NotReady node 31s v1.10.0
k8s-n3 NotReady node 29s v1.10.0

 

Deploying the Kubernetes Core Addons
With all of the steps above complete, a few addons still need to be deployed; Kubernetes DNS and Kubernetes Proxy in particular are essential.
Kubernetes Proxy
kube-proxy is the key addon that implements Services. It runs on every node, watches the API Server for changes to Service and Endpoints objects, and programs iptables accordingly to forward traffic. Here it runs as a DaemonSet, and the certificates it needs are created.

k8s-m1下载kube-proxy.yml来建立Kubernetes Proxy Addon:

1
2
3
4
5
6
7
8
9
10
11
12
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/kube-proxy.yml.conf"
serviceaccount "kube-proxy" created
clusterrolebinding.rbac.authorization.k8s.io "system:kube-proxy" created
configmap "kube-proxy" created
daemonset.apps "kube-proxy" created

$ kubectl -n kube-system get po -o wide -l k8s-app=kube-proxy
NAME READY STATUS RESTARTS AGE IP NODE
kube-proxy-8j5w8 1/1 Running 0 29s 192.16.35.16 k8s-n3
kube-proxy-c4zvt 1/1 Running 0 29s 192.16.35.11 k8s-m1
kube-proxy-clpl6 1/1 Running 0 29s 192.16.35.12 k8s-m2
...
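As a quick sanity check (a sketch; chain names can vary by version and proxy mode), you can look at the NAT rules kube-proxy programs on any node:

$ iptables-save -t nat | grep KUBE-SERVICES | head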

 

Kubernetes DNS
Kube-DNS is an important addon for communication between Pods inside the Kubernetes cluster; it lets Pods reach Services by domain name. It consists mainly of kube-dns together with SkyDNS: kube-dns watches Service and Endpoints changes and feeds that information to SkyDNS to keep the resolved addresses up to date.

On k8s-m1, download kube-dns.yml to create the Kubernetes DNS addon:

$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/kube-dns.yml.conf"
serviceaccount "kube-dns" created
service "kube-dns" created
deployment.extensions "kube-dns" created

$ kubectl -n kube-system get po -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
kube-dns-654684d656-zq5t8 0/3 Pending 0 1m

 

You will find the Pod stuck in Pending. This is because the Kubernetes Pod network has not been built yet, so all nodes are NotReady and the Pod cannot be scheduled onto a node. The next section explains how to build the Pod network to solve this.

Calico Network Installation and Configuration
Calico is a pure layer-3 data-center networking solution (no overlay network needed). Its advantages are that it integrates with a variety of cloud-native platforms and that on every node it uses an efficient vRouter implemented in the Linux kernel to forward traffic; as data-center complexity grows, BGP route reflectors can be used.

  • This guide does not build the Calico network manually; if you want to know how, see the Integration Guide.

On k8s-m1, download calico.yaml to create the Calico network (remember to change the interface name in the YAML to match the host's network interface):

$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/network/calico.yml.conf"
configmap "calico-config" created
daemonset "calico-node" created
deployment "calico-kube-controllers" created
clusterrolebinding "calico-cni-plugin" created
clusterrole "calico-cni-plugin" created
serviceaccount "calico-cni-plugin" created
clusterrolebinding "calico-kube-controllers" created
clusterrole "calico-kube-controllers" created
serviceaccount "calico-kube-controllers" created

$ kubectl -n kube-system get po -l k8s-app=calico-node -o wide
NAME READY STATUS RESTARTS AGE IP NODE
calico-node-22mbb 2/2 Running 0 1m 192.16.35.12 k8s-m2
calico-node-2qwf5 2/2 Running 0 1m 192.16.35.11 k8s-m1
calico-node-g2sp8 2/2 Running 0 1m 192.16.35.13 k8s-m3
calico-node-hghp4 2/2 Running 0 1m 192.16.35.14 k8s-n1
calico-node-qp6gf 2/2 Running 0 1m 192.16.35.15 k8s-n2
calico-node-zfx4n 2/2 Running 0 1m 192.16.35.16 k8s-n3

 

  • If your node IPs or network interfaces differ, modify the calico.yml file.

k8s-m1下载Calico CLI来查看Calico nodes:

1
2
3
4
5
6
7
8
9
10
11
12
13
14
15
16
17
18
19
20
21
22
23
24
$ wget https://github.com/projectcalico/calicoctl/releases/download/v3.1.0/calicoctl -O /usr/local/bin/calicoctl
$ chmod u+x /usr/local/bin/calicoctl
$ cat <<EOF > ~/calico-rc
export ETCD_ENDPOINTS="https://192.16.35.11:2379,https://192.16.35.12:2379,https://192.16.35.13:2379"
export ETCD_CA_CERT_FILE="/etc/etcd/ssl/etcd-ca.pem"
export ETCD_CERT_FILE="/etc/etcd/ssl/etcd.pem"
export ETCD_KEY_FILE="/etc/etcd/ssl/etcd-key.pem"
EOF

$ . ~/calico-rc
$ calicoctl node status
Calico process is running.

IPv4 BGP status
+--------------+-------------------+-------+----------+-------------+
| PEER ADDRESS | PEER TYPE | STATE | SINCE | INFO |
+--------------+-------------------+-------+----------+-------------+
| 192.16.35.12 | node-to-node mesh | up | 04:42:37 | Established |
| 192.16.35.13 | node-to-node mesh | up | 04:42:42 | Established |
| 192.16.35.14 | node-to-node mesh | up | 04:42:37 | Established |
| 192.16.35.15 | node-to-node mesh | up | 04:42:41 | Established |
| 192.16.35.16 | node-to-node mesh | up | 04:42:36 | Established |
+--------------+-------------------+-------+----------+-------------+
...

 

Check whether the previously pending Pod is now running:

$ kubectl -n kube-system get po -l k8s-app=kube-dns
NAME READY STATUS RESTARTS AGE
kube-dns-654684d656-j8xzx 3/3 Running 0 10m
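With the network and DNS up, a quick in-cluster resolution test (a sketch using a throwaway busybox Pod; older busybox image tags behave better with nslookup) looks like this:

$ kubectl run -it --rm busybox --image=busybox:1.28 --restart=Never -- nslookup kubernetes.default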

 

Deploying Kubernetes Extra Addons
This section explains how to deploy some commonly used official addons, such as the Dashboard and Heapster.

Dashboard
The Dashboard is the web UI developed by the Kubernetes community. With it, administrators can manage the Kubernetes cluster through a web interface; besides making management more convenient, it visualizes resources and presents system information more intuitively.

k8s-m1通过kubectl来建立kubernetes dashboard即可:

1
2
3
4
5
6
7
$ kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
$ kubectl -n kube-system get po,svc -l k8s-app=kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
kubernetes-dashboard-7d5dcdb6d9-j492l 1/1 Running 0 12s

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
kubernetes-dashboard ClusterIP 10.111.22.111 <none> 443/TCP 12s

 

Additionally, create a Cluster Role Binding named open-api. This is only for convenience while testing and should not normally be enabled, otherwise all APIs become directly accessible:

$ cat <<EOF | kubectl create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: open-api
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- apiGroup: rbac.authorization.k8s.io
  kind: User
  name: system:anonymous
EOF

 

  • Note: administrators can grant API access to specific users; here, for convenience, it is bound directly to the cluster-admin cluster role.

When done, the Dashboard can be reached in a browser at https://192.16.35.10:6443/api/v1/namespaces/kube-system/services/https:kubernetes-dashboard:/proxy/
Since version 1.7 the Dashboard no longer ships with full permissions, so a service account bound to the cluster-admin role needs to be created:

$ kubectl -n kube-system create sa dashboard
$ kubectl create clusterrolebinding dashboard --clusterrole cluster-admin --serviceaccount=kube-system:dashboard
$ kubectl -n kube-system describe secrets | sed -rn '/\sdashboard-token-/,/^token/{/^token/s#\S+\s+##p}'
eyJhbGciOiJSUzI1NiIsInR5cCI6IkpXVCJ9.eyJpc3MiOiJrdWJlcm5ldGVzL3NlcnZpY2VhY2NvdW50Iiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9uYW1lc3BhY2UiOiJrdWJlLXN5c3RlbSIsImt1YmVybmV0ZXMuaW8vc2VydmljZWFjY291bnQvc2VjcmV0Lm5hbWUiOiJkYXNoYm9hcmQtdG9rZW4tdzVocmgiLCJrdWJlcm5ldGVzLmlvL3NlcnZpY2VhY2NvdW50L3NlcnZpY2UtYWNjb3VudC5uYW1lIjoiZGFzaGJvYXJkIiwia3ViZXJuZXRlcy5pby9zZXJ2aWNlYWNjb3VudC9zZXJ2aWNlLWFjY291bnQudWlkIjoiYWJmMTFjYzMtZjRlYi0xMWU3LTgzYWUtMDgwMDI3NjdkOWI5Iiwic3ViIjoic3lzdGVtOnNlcnZpY2VhY2NvdW50Omt1YmUtc3lzdGVtOmRhc2hib2FyZCJ9.Xuyq34ci7Mk8bI97o4IldDyKySOOqRXRsxVWIJkPNiVUxKT4wpQZtikNJe2mfUBBD-JvoXTzwqyeSSTsAy2CiKQhekW8QgPLYelkBPBibySjBhJpiCD38J1u7yru4P0Pww2ZQJDjIxY4vqT46ywBklReGVqY3ogtUQg-eXueBmz-o7lJYMjw8L14692OJuhBjzTRSaKW8U2MPluBVnD7M2SOekDff7KpSxgOwXHsLVQoMrVNbspUCvtIiEI1EiXkyCNRGwfnd2my3uzUABIHFhm0_RZSmGwExPbxflr8Fc6bxmuz-_jSdOtUidYkFIzvEWw2vRovPgs3MXTv59RwUw

 

  • Copy the token and paste it into the Kubernetes Dashboard. Note that in general you should grant specific access to individual users instead.


Heapster
Heapster is a container-cluster monitoring and performance-analysis tool maintained by the Kubernetes community. Heapster gets the list of all nodes from the Kubernetes apiserver, collects metrics from the kubelet on each of those nodes, stores everything in its InfluxDB backend, and finally Grafana reads the data from InfluxDB for visualization.

On k8s-m1, create the Kubernetes monitoring stack with kubectl:

$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/kube-monitor.yml.conf"
$ kubectl -n kube-system get po,svc
NAME READY STATUS RESTARTS AGE
...
po/heapster-74fb5c8cdc-62xzc 4/4 Running 0 7m
po/influxdb-grafana-55bd7df44-nw4nc 2/2 Running 0 7m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
...
svc/heapster ClusterIP 10.100.242.225 <none> 80/TCP 7m
svc/monitoring-grafana ClusterIP 10.101.106.180 <none> 80/TCP 7m
svc/monitoring-influxdb ClusterIP 10.109.245.142 <none> 8083/TCP,8086/TCP 7m
...

 

When done, the Grafana dashboard can be reached in a browser at https://192.16.35.10:6443/api/v1/namespaces/kube-system/services/monitoring-grafana/proxy/
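Once Heapster has collected a few minutes of data, a quick way to confirm the pipeline works (a sketch; in v1.10 kubectl top is backed by Heapster) is:

$ kubectl top node
$ kubectl top pod -n kube-system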


Ingress
Ingress exposes in-cluster services through a load balancer such as Nginx or HAProxy. Ingress rules map domain names onto Kubernetes Services inside the cluster, which avoids having to expose a large number of NodePorts.

On k8s-m1, create the Ingress Controller with kubectl:

$ kubectl create ns ingress-nginx
$ kubectl apply -f "https://kairen.github.io/files/manual-v1.10/addon/ingress-controller.yml.conf"
$ kubectl -n ingress-nginx get po
NAME READY STATUS RESTARTS AGE
default-http-backend-5c6d95c48-rzxfb 1/1 Running 0 7m
nginx-ingress-controller-699cdf846-982n4 1/1 Running 0 7m

 

  • The Traefik Ingress Controller is an alternative here.

Testing Ingress
First create an Nginx HTTP server Deployment and Service:

$ kubectl run nginx-dp --image nginx --port 80
$ kubectl expose deploy nginx-dp --port 80
$ kubectl get po,svc
$ cat <<EOF | kubectl create -f -
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: test-nginx-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: test.nginx.com
    http:
      paths:
      - path: /
        backend:
          serviceName: nginx-dp
          servicePort: 80
EOF

 

Test with curl:

$ curl 192.16.35.10 -H 'Host: test.nginx.com'
<!DOCTYPE html>
<html>
<head>
<title>Welcome to nginx!</title>
...

# test that another domain name returns 404
$ curl 192.16.35.10 -H 'Host: test.nginx.com1'
default backend - 404

 

Helm Tiller Server
Helm is the management tool for Kubernetes Charts, and a Kubernetes Chart is a pre-configured bundle of Kubernetes resources. The Tiller Server is mainly responsible for receiving commands from the client and talking to the Kubernetes cluster through the kube-apiserver; based on the contents defined in a Chart, it generates and manages the Kubernetes deployment files for the corresponding API objects (called a Release).

First, install the Helm tool on k8s-m1:

$ wget -qO- https://kubernetes-helm.storage.googleapis.com/helm-v2.8.1-linux-amd64.tar.gz | tar -zx
$ sudo mv linux-amd64/helm /usr/local/bin/

 

Also install socat on all node machines:

$ sudo apt-get install -y socat
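The command above is for Ubuntu; on CentOS nodes the equivalent would be:

$ sudo yum install -y socat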

 

Next, initialize Helm (this installs the Tiller Server):

$ kubectl -n kube-system create sa tiller
$ kubectl create clusterrolebinding tiller --clusterrole cluster-admin --serviceaccount=kube-system:tiller
$ helm init --service-account tiller
...
Tiller (the Helm server-side component) has been installed into your Kubernetes Cluster.
Happy Helming!

$ kubectl -n kube-system get po -l app=helm
NAME READY STATUS RESTARTS AGE
tiller-deploy-5f789bd9f7-tzss6 1/1 Running 0 29s

$ helm version
Client: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.8.1", GitCommit:"6af75a8fd72e2aa18a2b278cfe5c7a1c5feca7f2", GitTreeState:"clean"}

 

Testing Helm
Deploy a simple Jenkins instance as a functional test:

$ helm install --name demo --set Persistence.Enabled=false stable/jenkins
$ kubectl get po,svc -l app=demo-jenkins
NAME READY STATUS RESTARTS AGE
demo-jenkins-7bf4bfcff-q74nt 1/1 Running 0 2m

NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
demo-jenkins LoadBalancer 10.103.15.129 <pending> 8080:31161/TCP 2m
demo-jenkins-agent ClusterIP 10.103.160.126 <none> 50000/TCP 2m

# get the admin account's password
$ printf $(kubectl get secret --namespace default demo-jenkins -o jsonpath="{.data.jenkins-admin-password}" | base64 --decode);echo
r6y9FMuF2u

 

When done, Jenkins can be reached in a browser at http://192.16.35.10:31161

When testing is finished, it can be deleted:

$ helm ls
NAME REVISION UPDATED STATUS CHART NAMESPACE
demo 1 Tue Apr 10 07:29:51 2018 DEPLOYED jenkins-0.14.4 default

$ helm delete demo --purge
release "demo" deleted

 

More Helm apps can be found on Kubeapps Hub.
Testing the Cluster
SSH into the k8s-m1 node, then shut it down:

$ sudo poweroff

 

Then go to the k8s-m2 node and use kubectl to check whether the cluster still works:

# first check the etcd status; etcd-0 is down because the machine was powered off
$ kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-1 Healthy {"health": "true"}
etcd-2 Healthy {"health": "true"}
etcd-0 Unhealthy Get https://192.16.35.11:2379/health: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

# test whether a Pod can still be created
$ kubectl run nginx --image nginx --restart=Never --port 80
$ kubectl get po
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 22s

 

Original article: https://www.cnblogs.com/kuku0223/p/9124988.html
