Kubernetes 1.10: Binary Cluster Deployment

A previous post covered installing Kubernetes automatically with kubeadm, but since every component runs as a container there, it did not go into much configuration detail. To better understand what each Kubernetes component does, this post installs a Kubernetes cluster from the binary packages and explains the configuration of each component in more detail.

In version 1.10, connections over the insecure port (8080 by default) are being phased out, so this guide builds the cluster with mutual TLS authentication based on a CA certificate, which makes the configuration somewhat more involved.

Environment

1. Two CentOS 7 hosts. Make the hostnames resolvable, disable the firewall and SELinux, and synchronize the system time:
10.0.0.1 node-1 Master
10.0.0.2 node-2 Node
Deployed on the Master:

  • etcd
  • kube-apiserver
  • kube-controller-manager
  • kube-scheduler

Deployed on the Node:

  • Docker
  • kubelet
  • kube-proxy

2. Download the official packages from https://github.com/kubernetes/kubernetes/ . Here we use the binary release, version 1.10.2:

  • kubernetes-server-linux-amd64.tar.gz
  • kubernetes-node-linux-amd64.tar.gz

Master Deployment

Since these are binary packages, just unpack them and copy the binaries into an executable path:

# tar xf kubernetes-server-linux-amd64.tar.gz
# cd kubernetes/server/bin
# cp `ls|egrep -v "\.tar|_tag"` /usr/bin/
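
A quick check that the control-plane binaries landed in the path (the server tarball also ships kubelet, kube-proxy and a few other tools; only the following four are needed on this Master):

# ls /usr/bin/kube-apiserver /usr/bin/kube-controller-manager /usr/bin/kube-scheduler /usr/bin/kubectl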

The configuration of each service is described below.

1. etcd

The etcd service is the core datastore of a Kubernetes cluster and must be installed and started before the other services. This demonstration deploys a single etcd node; a 3-node cluster can also be configured. If you want an even simpler setup, installing etcd directly via yum is recommended.

# wget https://github.com/coreos/etcd/releases/download/v3.2.20/etcd-v3.2.20-linux-amd64.tar.gz
# tar xf etcd-v3.2.20-linux-amd64.tar.gz
# cd etcd-v3.2.20-linux-amd64
# cp etcd etcdctl  /usr/bin/
# mkdir /var/lib/etcd
# mkdir /etc/etcd

Edit the systemd unit file:

vim /usr/lib/systemd/system/etcd.service

[Unit]
Description=Etcd Server
After=network.target

[Service]
Type=simple
WorkingDirectory=/var/lib/etcd/
EnvironmentFile=-/etc/etcd/etcd.conf
ExecStart=/usr/bin/etcd

[Install]
WantedBy=multi-user.target

Start the service:

systemctl daemon-reload
systemctl start etcd
systemctl status etcd.service

Check the service status:

[root@node-1 ~]# netstat -lntp|grep etcd
tcp        0      0 127.0.0.1:2379          0.0.0.0:*               LISTEN      18794/etcd          
tcp        0      0 127.0.0.1:2380          0.0.0.0:*               LISTEN      18794/etcd 

[root@node-1 ~]# etcdctl  cluster-health
member 8e9e05c52164694d is healthy: got healthy result from http://localhost:2379
cluster is healthy

Note: etcd listens on two ports: 2379 serves client requests and 2380 is used for peer communication between cluster members. When building an etcd cluster, edit the configuration file to set the listen IPs and ports.
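
For a setup where etcd should listen on the host IP rather than only on localhost (the first step towards a multi-member cluster), the environment file loaded by the unit above can look roughly like the sketch below; the member name, token and IPs are example values, and on a real cluster every member would be listed in ETCD_INITIAL_CLUSTER:

# cat /etc/etcd/etcd.conf
ETCD_NAME=etcd-1
ETCD_DATA_DIR=/var/lib/etcd
ETCD_LISTEN_CLIENT_URLS=http://10.0.0.1:2379,http://127.0.0.1:2379
ETCD_ADVERTISE_CLIENT_URLS=http://10.0.0.1:2379
ETCD_LISTEN_PEER_URLS=http://10.0.0.1:2380
ETCD_INITIAL_ADVERTISE_PEER_URLS=http://10.0.0.1:2380
ETCD_INITIAL_CLUSTER=etcd-1=http://10.0.0.1:2380
ETCD_INITIAL_CLUSTER_STATE=new
ETCD_INITIAL_CLUSTER_TOKEN=etcd-cluster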

2. kube-apiserver

1. Edit the systemd unit file:

vim /usr/lib/systemd/system/kube-apiserver.service

[Unit]
Description=Kubernetes API Server
Documentation=https://kubernetes.io/docs/concepts/overview
After=network.target
After=etcd.service

[Service]
EnvironmentFile=/etc/kubernetes/apiserver
ExecStart=/usr/bin/kube-apiserver $KUBE_API_ARGS
Restart=on-failure
Type=notify
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2. Create the parameter file (the configuration directory must be created first; see step 3 below):

# cat /etc/kubernetes/apiserver 

KUBE_API_ARGS="--storage-backend=etcd3 --etcd-servers=http://127.0.0.1:2379 --bind-address=0.0.0.0 --secure-port=6443 --service-cluster-ip-range=10.222.0.0/16 --service-node-port-range=1-65535 --client-ca-file=/etc/kubernetes/ssl/ca.crt --tls-private-key-file=/etc/kubernetes/ssl/server.key --tls-cert-file=/etc/kubernetes/ssl/server.crt --enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,DefaultStorageClass,ResourceQuota --logtostderr=false --log-dir=/var/log/kubernetes --v=2"

  • service-cluster-ip-range is the virtual IP range used for Services; it can be chosen freely, but it must not overlap with the host network.
  • bind-address is the address the apiserver listens on; the corresponding port is 6443, served over HTTPS.
  • client-ca-file and the tls-* options point to the certificate files used for authentication; they are referenced here ahead of time and created later in the matching paths.

3. Create the log and certificate directories, and the configuration directory as well if it does not exist yet:

mkdir /var/log/kubernetes
mkdir /etc/kubernetes
mkdir /etc/kubernetes/ssl

3. kube-controller-manager

1. Configure the systemd unit file:

# cat /usr/lib/systemd/system/kube-controller-manager.service 

[Unit]
Description=Kubernetes Controller Manager 
Documentation=https://kubernetes.io/docs/setup
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/controller-manager
ExecStart=/usr/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2. Configure the parameter file:

# cat /etc/kubernetes/controller-manager 

KUBE_CONTROLLER_MANAGER_ARGS="--master=https://10.0.0.1:6443 --service-account-private-key-file=/etc/kubernetes/ssl/server.key --root-ca-file=/etc/kubernetes/ssl/ca.crt --kubeconfig=/etc/kubernetes/kubeconfig"

4. kube-scheduler

1. Configure the systemd unit file:

# cat /usr/lib/systemd/system/kube-scheduler.service 

[Unit]
Description=Kubernetes Scheduler
Documentation=https://kubernetes.io/docs/setup
After=kube-apiserver.service
Requires=kube-apiserver.service

[Service]
EnvironmentFile=/etc/kubernetes/scheduler
ExecStart=/usr/bin/kube-scheduler $KUBE_SCHEDULER_ARGS
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2. Configure the parameter file:

# cat /etc/kubernetes/scheduler 

KUBE_SCHEDULER_ARGS="--master=https://10.0.0.1:6443 --kubeconfig=/etc/kubernetes/kubeconfig"

5. Create the kubeconfig file

# cat /etc/kubernetes/kubeconfig 

apiVersion: v1
kind: Config
users:
- name: controllermanager
  user:
    client-certificate: /etc/kubernetes/ssl/cs_client.crt
    client-key: /etc/kubernetes/ssl/cs_client.key
clusters:
- name: local
  cluster:
    certificate-authority: /etc/kubernetes/ssl/ca.crt
contexts:
- context:
    cluster: local
    user: controllermanager
  name: my-context
current-context: my-context
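
The same kubeconfig can also be generated with kubectl instead of being written by hand; this is just an alternative sketch, assuming the certificates created in the next step are already in place (the --server value mirrors the --master flag passed to the services):

# kubectl config set-cluster local --certificate-authority=/etc/kubernetes/ssl/ca.crt --server=https://10.0.0.1:6443 --kubeconfig=/etc/kubernetes/kubeconfig
# kubectl config set-credentials controllermanager --client-certificate=/etc/kubernetes/ssl/cs_client.crt --client-key=/etc/kubernetes/ssl/cs_client.key --kubeconfig=/etc/kubernetes/kubeconfig
# kubectl config set-context my-context --cluster=local --user=controllermanager --kubeconfig=/etc/kubernetes/kubeconfig
# kubectl config use-context my-context --kubeconfig=/etc/kubernetes/kubeconfig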

6. Create the CA and certificates

1. Create the CA certificate and the kube-apiserver private key:

# cd  /etc/kubernetes/ssl/
# openssl genrsa -out ca.key 2048
# openssl req -x509 -new -nodes -key ca.key -subj "/CN=10.0.0.1" -days 5000 -out ca.crt    # CN is the Master's IP address
# openssl genrsa -out server.key 2048
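
An optional sanity check on the CA just created; the subject should show the Master IP given above and the validity window should cover the requested 5000 days:

# openssl x509 -in ca.crt -noout -subject -dates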

2. Create the master_ssl.cnf file:

# cat master_ssl.cnf 

[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
DNS.5 = k8s_master
IP.1 = 10.222.0.1                     # ClusterIP address (first IP of service-cluster-ip-range)
IP.2 = 10.0.0.1                       # Master IP address

3. Using the file above, create server.csr and server.crt with the following commands:

# openssl req -new -key server.key -subj "/CN=node-1" -config master_ssl.cnf -out server.csr   # CN is the hostname
# openssl x509 -req -in server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -days 5000 -extensions v3_req -extfile master_ssl.cnf -out server.crt

Tip: after running the commands above, there will be 6 files: ca.crt, ca.key, ca.srl, server.crt, server.csr, server.key.
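
It is also worth confirming that the subjectAltName entries from master_ssl.cnf were actually copied into the signed certificate; if the v3 extensions are missing, clients connecting to the apiserver by IP or by the kubernetes service name will fail TLS verification:

# openssl x509 -in server.crt -noout -text | grep -A1 "Subject Alternative Name"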

4. Create the kube-controller-manager client certificates:

# cd  /etc/kubernetes/ssl/
# openssl genrsa -out cs_client.key 2048
# openssl req -new -key cs_client.key -subj "/CN=node-1" -out cs_client.csr     # CN is the hostname
# openssl x509 -req -in cs_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out cs_client.crt -days 5000

5. Make sure the /etc/kubernetes/ssl/ directory contains the following files:

[root@node-1 ssl]# ll
total 36
-rw-r--r-- 1 root root 1090 May 25 15:34 ca.crt
-rw-r--r-- 1 root root 1675 May 25 15:33 ca.key
-rw-r--r-- 1 root root   17 May 25 15:41 ca.srl
-rw-r--r-- 1 root root  973 May 25 15:41 cs_client.crt
-rw-r--r-- 1 root root  887 May 25 15:41 cs_client.csr
-rw-r--r-- 1 root root 1675 May 25 15:40 cs_client.key
-rw-r--r-- 1 root root 1192 May 25 15:37 server.crt
-rw-r--r-- 1 root root 1123 May 25 15:36 server.csr
-rw-r--r-- 1 root root 1675 May 25 15:34 server.key

7. Start the services

1. Start kube-apiserver:

#  systemctl daemon-reload
#  systemctl enable kube-apiserver
#  systemctl start kube-apiserver

Note: kube-apiserver listens on two ports by default (8080 and 6443). Port 8080 is the insecure port that components historically used to talk to each other; it is used less and less in newer versions. The host running kube-apiserver is usually called the Master. Port 6443 serves HTTPS with authentication and authorization.
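
A quick check that both listeners are up; with the configuration above, 6443 is bound on all interfaces while the insecure 8080 port stays on the loopback address by default:

# netstat -lntp | egrep "6443|8080"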

2. Start kube-controller-manager:

#  systemctl daemon-reload
#  systemctl enable kube-controller-manager
#  systemctl start kube-controller-manager

Note: this service listens on port 10252.

3. Start kube-scheduler:

#  systemctl daemon-reload
#  systemctl enable kube-scheduler
#  systemctl start kube-scheduler

Note: this service listens on port 10251.

4. As each service is started, check its status and logs to confirm there are no errors:

# systemctl status KUBE-SERVICE-NAME
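
With all three services running, overall control-plane health can be checked from the Master. In 1.10 kubectl still talks to the local insecure port (8080) by default, so no extra configuration is needed for this check; scheduler, controller-manager and etcd-0 should all report Healthy:

# kubectl get componentstatuses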

Node Deployment

The services on a Node are much simpler: only docker, kubelet and kube-proxy need to be deployed.

First, configure the following file:

# cat /etc/sysctl.d/k8s.conf 

net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
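
These kernel parameters only exist once the br_netfilter module is loaded, so load it and apply the file before starting the services (a standard preparation step):

# modprobe br_netfilter
# sysctl -p /etc/sysctl.d/k8s.conf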

Upload the Kubernetes node binary package to the Node, unpack it, and run:

tar xf kubernetes-node-linux-amd64.tar.gz 
cd kubernetes/node/bin
cp kubectl kubelet  kube-proxy  /usr/bin/

mkdir /var/lib/kubelet
mkdir /var/log/kubernetes
mkdir /etc/kubernetes

1. Docker

1. Install Docker 17.03 from the downloaded RPM packages:

yum install docker-ce-selinux-17.03.2.ce-1.el7.centos.noarch.rpm  -y
yum install docker-ce-17.03.2.ce-1.el7.centos.x86_64.rpm  -y

2. Configure the startup parameters:

vim /usr/lib/systemd/system/docker.service

...
ExecStart=/usr/bin/dockerd --registry-mirror https://qxx96o44.mirror.aliyuncs.com
...
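
Editing the unit file works, but it can be overwritten when the docker package is upgraded; setting the mirror in /etc/docker/daemon.json is an equivalent alternative (a sketch using the same mirror address; if daemon.json is used, drop the --registry-mirror flag from ExecStart to avoid a conflict):

# cat /etc/docker/daemon.json
{
  "registry-mirrors": ["https://qxx96o44.mirror.aliyuncs.com"]
}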

3. Start Docker:

systemctl daemon-reload
systemctl enable docker
systemctl start docker

2. Create the kubelet certificates

Every Node needs its own kubelet client certificate.

Copy ca.crt and ca.key from the Master into the Node's ssl directory, then run the following commands to generate kubelet_client.csr and kubelet_client.crt:

# cd /etc/kubernetes/ssl/
# openssl genrsa -out kubelet_client.key 2048
# openssl req -new -key kubelet_client.key -subj "/CN=10.0.0.2" -out kubelet_client.csr      # CN is the Node's IP
# openssl x509 -req -in kubelet_client.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kubelet_client.crt -days 5000

3. kubelet

1. Configure the unit file:

# cat /usr/lib/systemd/system/kubelet.service 

[Unit]
Description=Kubernetes Kubelet
Documentation=https://kubernetes.io/doc
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/var/lib/kubelet
ExecStart=/usr/bin/kubelet --kubeconfig=/etc/kubernetes/kubeconfig.yaml --logtostderr=false --log-dir=/var/log/kubernetes --v=2
Restart=on-failure

[Install]
WantedBy=multi-user.target

2. Configuration file:

# cat /etc/kubernetes/kubeconfig.yaml 

apiVersion: v1
kind: Config
users:
- name: kubelet
  user:
    client-certificate: /etc/kubernetes/ssl/kubelet_client.crt
    client-key: /etc/kubernetes/ssl/kubelet_client.key
clusters:
- name: local
  cluster: 
    certificate-authority: /etc/kubernetes/ssl/ca.crt
    server: https://10.0.0.1:6443
contexts:
- context:
    cluster: local
    user: kubelet
  name: my-context
current-context: my-context

3. Start the service:

# systemctl daemon-reload
# systemctl start kubelet
# systemctl enable kubelet

4. Verify on the Master:

[root@node-1 ~]# kubectl get nodes
NAME      STATUS    ROLES     AGE       VERSION
node-2    Ready     <none>    36m       v1.10.2

Note: kubelet acts as the node agent; once it is running, the node shows up on the Master. Its kubeconfig is a YAML file, and the Master's address is specified there (the server field) rather than on the command line. By default kubelet listens on ports 10248, 10250, 10255 and 4194.

4. kube-proxy

1. Create the systemd unit file:

# cat /usr/lib/systemd/system/kube-proxy.service 

[Unit]
Description=Kubernetes Kube-Proxy
Documentation=https://kubernetes.io/doc
After=network.service
Requires=network.service

[Service]
EnvironmentFile=/etc/kubernetes/proxy
ExecStart=/usr/bin/kube-proxy $KUBE_PROXY_ARGS  
Restart=on-failure
LimitNOFILE=65536

[Install]
WantedBy=multi-user.target

2. Create the parameter file:

# cat /etc/kubernetes/proxy 

KUBE_PROXY_ARGS="--master=https://10.0.0.1:6443  --kubeconfig=/etc/kubernetes/kubeconfig.yaml"

3. Start the service:

# systemctl daemon-reload
# systemctl start kube-proxy
# systemctl enable kube-proxy

Note: once started, the service listens on ports 10249 and 10256 by default.

Creating an Application

With the deployment above complete, applications can be created. Before starting, however, every Node must have the pause image locally; otherwise creation will fail in environments (such as mainland China) where the Google container registry is unreachable.
Run the following on each Node to work around the image problem:

docker pull  mirrorgooglecontainers/pause-amd64:3.1
docker tag mirrorgooglecontainers/pause-amd64:3.1 k8s.gcr.io/pause-amd64:3.1
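
A quick check that the image is now available under the name kubelet expects (k8s.gcr.io/pause-amd64:3.1 is the default pod-infra-container image for this release):

# docker images k8s.gcr.io/pause-amd64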

Below, a simple application is created to verify that the cluster works properly.

Create an nginx application

1. Edit the nginx.yaml file:

apiVersion: v1
kind: ReplicationController
metadata:
  name: myweb
spec:
  replicas: 2        
  selector:
    app: myweb
  template:
    metadata:
      labels:
        app: myweb
    spec:
      containers:
        - name: myweb
          image: nginx
          ports:
          - containerPort: 80

2. Apply it:

# kubectl create -f nginx.yaml

3. Check the status:

[root@node-1 ~]# kubectl get rc
NAME      DESIRED   CURRENT   READY     AGE
myweb     2         2         2         3h

[root@node-1 ~]# kubectl get pods
NAME          READY     STATUS    RESTARTS   AGE
myweb-qtgrv   1/1       Running   0          1h
myweb-z9d2c   1/1       Running   0          1h

[root@node-2 ~]# docker ps|grep nginx
067db96d0c97        nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884   "nginx -g ‘daemon ..."   About an hour ago   Up About an hour                        k8s_myweb_myweb-qtgrv_default_3213ec67-5fef-11e8-9e43-000c295f81fb_0
dd8f7458e410        nginx@sha256:0fb320e2a1b1620b4905facb3447e3d84ad36da0b2c8aa8fe3a5a81d1187b884   "nginx -g ‘daemon ..."   About an hour ago   Up About an hour                        k8s_myweb_myweb-z9d2c_default_3214600e-5fef-11e8-9e43-000c295f81fb_0

4. Create a Service and map it to a host port:

# cat nginx-service.yaml 

apiVersion: v1
kind: Service
metadata: 
  name: myweb
spec:
  type: NodePort      # expose the Service for external access
  ports:
    - port: 80
      nodePort: 30001   # externally reachable port, mapped onto each host
  selector:
    app: myweb

# Create the Service

# kubectl create -f nginx-service.yaml

# Verify:

[root@node-1 ~]# kubectl get services
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.222.0.1     <none>        443/TCP        1d
myweb        NodePort    10.222.35.97   <none>        80:30001/TCP   1h

5. Port 30001 is opened on every node running the kube-proxy service; accessing this port serves the default nginx welcome page.

# netstat -lntp|grep 30001
tcp6       0      0 :::30001                :::*                    LISTEN      7713/kube-proxy  
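
Finally, the page can be fetched through the NodePort from any machine that can reach the Node (10.0.0.2 here); an HTTP 200 response shows that the NodePort -> kube-proxy -> Pod path works end to end:

# curl -I http://10.0.0.2:30001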

Original article: http://blog.51cto.com/tryingstuff/2120374
