
Part 2: Kubernetes Cluster Deployment (master) - Building a Single Cluster v1.0



Environment planning for the single-cluster platform

[Diagram: environment planning for the single-cluster platform]

Multi-master environment planning

[Diagram: multi-master environment planning]

The three officially supported deployment methods

minikube
Minikube is a tool that quickly runs a single-node Kubernetes locally; it is intended only for trying out Kubernetes or for day-to-day development.
Deployment guide: https://kubernetes.io/docs/setup/minikube/

kubeadm
Kubeadm is also a tool; it provides kubeadm init and kubeadm join for quickly deploying a Kubernetes cluster.
Deployment guide: https://kubernetes.io/docs/reference/setup-tools/kubeadm/kubeadm/

Binary packages (the method used in this guide)
Recommended: download the release binaries from the official site and deploy each component by hand to assemble the Kubernetes cluster.
Download: https://github.com/kubernetes/kubernetes/releases

Setting up the etcd cluster

Generate the certificates on 192.168.1.13 first, then copy them to the other two nodes.
1. Use cfssl to generate self-signed certificates; download the cfssl tools first:

wget https://pkg.cfssl.org/R1.2/cfssl_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssljson_linux-amd64
wget https://pkg.cfssl.org/R1.2/cfssl-certinfo_linux-amd64
chmod +x cfssl_linux-amd64 cfssljson_linux-amd64 cfssl-certinfo_linux-amd64
mv cfssl_linux-amd64 /usr/local/bin/cfssl
mv cfssljson_linux-amd64 /usr/local/bin/cfssljson
mv cfssl-certinfo_linux-amd64 /usr/bin/cfssl-certinfo
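A quick sanity check that the tools are installed and on the PATH (the exact version output may differ):

cfssl version
which cfssljson cfssl-certinfo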

2. Generate the certificates
Create the following three files:

[root@docker etcd]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "www": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

[root@docker etcd]# cat ca-csr.json
{
  "CN": "etcd CA",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing"
    }
  ]
}

[root@docker etcd]# cat server-csr.json
{
  "CN": "etcd",
  "hosts": [
    "192.168.1.13",
    "192.168.1.23",
    "192.168.1.24"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing"
    }
  ]
}

Notes:
1. The hosts field lists the IPs of the etcd cluster members.
2. The expiry of 87600h is 10 years.

Generate the certificates:
1. cfssl gencert -initca ca-csr.json | cfssljson -bare ca -
2. cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=www server-csr.json | cfssljson -bare server
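If both commands succeed, the CA and server key pairs are written to the current directory (cfssljson -bare also leaves the .csr requests behind):

[root@docker etcd]# ls *.pem
ca-key.pem  ca.pem  server-key.pem  server.pem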


3. Deploy the etcd cluster
Binary package download: https://github.com/coreos/etcd/releases/tag/v3.3.10

3.1 mkdir /opt/etcd/{bin,cfg,ssl} -p
3.2 tar xf etcd-v3.3.10-linux-amd64.tar.gz
3.3 mv etcd-v3.3.10-linux-amd64/{etcd,etcdctl} /opt/etcd/bin/
3.4 [root@docker etcd]# ls /opt/etcd/bin/
etcd etcdctl

Create the etcd configuration file:
3.5 [root@docker etcd]# cat /opt/etcd/cfg/etcd
#[Member]
ETCD_NAME="etcd01"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.13:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.13:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.13:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.13:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.13:2380,etcd02=https://192.168.1.23:2380,etcd03=https://192.168.1.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Notes:
ETCD_NAME                          node name
ETCD_DATA_DIR                      data directory
ETCD_LISTEN_PEER_URLS              listen address for cluster (peer) traffic
ETCD_LISTEN_CLIENT_URLS            listen address for client traffic
ETCD_INITIAL_ADVERTISE_PEER_URLS   peer address advertised to the cluster
ETCD_ADVERTISE_CLIENT_URLS         client address advertised to clients
ETCD_INITIAL_CLUSTER               addresses of all cluster members
ETCD_INITIAL_CLUSTER_TOKEN         cluster token
ETCD_INITIAL_CLUSTER_STATE         state when joining: new for a new cluster, existing to join an existing one

systemd管理etcd:
[root@docker etcd]# cat /usr/lib/systemd/system/etcd.service
[Unit]
Description=Etcd Server
After=network.target
After=network-online.target
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/opt/etcd/cfg/etcd
ExecStart=/opt/etcd/bin/etcd \
--name=${ETCD_NAME} \
--data-dir=${ETCD_DATA_DIR} \
--listen-peer-urls=${ETCD_LISTEN_PEER_URLS} \
--listen-client-urls=${ETCD_LISTEN_CLIENT_URLS},http://127.0.0.1:2379 \
--advertise-client-urls=${ETCD_ADVERTISE_CLIENT_URLS} \
--initial-advertise-peer-urls=${ETCD_INITIAL_ADVERTISE_PEER_URLS} \
--initial-cluster=${ETCD_INITIAL_CLUSTER} \
--initial-cluster-token=${ETCD_INITIAL_CLUSTER_TOKEN} \
--initial-cluster-state=new \
--cert-file=/opt/etcd/ssl/server.pem \
--key-file=/opt/etcd/ssl/server-key.pem \
--peer-cert-file=/opt/etcd/ssl/server.pem \
--peer-key-file=/opt/etcd/ssl/server-key.pem \
--trusted-ca-file=/opt/etcd/ssl/ca.pem \
--peer-trusted-ca-file=/opt/etcd/ssl/ca.pem
Restart=on-failure
LimitNOFILE=65536
[Install]
WantedBy=multi-user.target

Note: the ${...} variables in ExecStart are loaded from the file referenced by EnvironmentFile.

3.6 Copy the certificates and keys generated earlier to the paths referenced in the config:
cp ca*.pem server*.pem /opt/etcd/ssl/

3.7 Check that the certificates are in place (ls /opt/etcd/ssl)

Start etcd and enable it at boot:
systemctl start etcd
systemctl enable etcd

Note: on the first node the start command may block or time out until the other members come up; that is expected for a new cluster.

Set up the other two nodes
1. Copy the etcd directory:
scp -r etcd/ 192.168.1.23:/opt/
scp -r etcd/ 192.168.1.24:/opt/

2. Copy the systemd unit:
scp /usr/lib/systemd/system/etcd.service 192.168.1.23:/usr/lib/systemd/system/
scp /usr/lib/systemd/system/etcd.service 192.168.1.24:/usr/lib/systemd/system/

3. Log in to 192.168.1.23 and 192.168.1.24 and adjust the configuration file on each:
[root@docker cfg]# cat etcd
#[Member]
ETCD_NAME="etcd02"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.23:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.23:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.23:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.23:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.13:2380,etcd02=https://192.168.1.23:2380,etcd03=https://192.168.1.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

On 192.168.1.24:
[root@docker cfg]# cat etcd
#[Member]
ETCD_NAME="etcd03"
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_PEER_URLS="https://192.168.1.24:2380"
ETCD_LISTEN_CLIENT_URLS="https://192.168.1.24:2379"
#[Clustering]
ETCD_INITIAL_ADVERTISE_PEER_URLS="https://192.168.1.24:2380"
ETCD_ADVERTISE_CLIENT_URLS="https://192.168.1.24:2379"
ETCD_INITIAL_CLUSTER="etcd01=https://192.168.1.13:2380,etcd02=https://192.168.1.23:2380,etcd03=https://192.168.1.24:2380"
ETCD_INITIAL_CLUSTER_TOKEN="etcd-cluster"
ETCD_INITIAL_CLUSTER_STATE="new"

Then start etcd on both of them: systemctl start etcd; systemctl enable etcd

Once all nodes are deployed, check the etcd cluster status:

[root@docker etcd]# cd /opt/etcd/ssl/
[root@docker ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.1.13:2379,https://192.168.1.23:2379,https://192.168.1.24:2379" cluster-health

member 86ab7347ef3517f6 is healthy: got healthy result from https://192.168.1.23:2379
member b08a644fd7247c5e is healthy: got healthy result from https://192.168.1.13:2379
member d5c3beb629639303 is healthy: got healthy result from https://192.168.1.24:2379
cluster is healthy

If you see the output above, the cluster was deployed successfully. If something is wrong, check the logs first: /var/log/messages or journalctl -u etcd
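The same TLS flags also work with the member list subcommand if you want to see the member IDs and peer URLs (a sketch; the IDs will differ in your cluster):

/opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem \
--endpoints="https://192.168.1.13:2379,https://192.168.1.23:2379,https://192.168.1.24:2379" member list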

Install Docker on the Node hosts

Install Docker on 192.168.1.23 and 192.168.1.24.
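A minimal sketch of a CentOS 7 install using Docker's official repository (assumes the nodes have internet access):

yum install -y yum-utils
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker
systemctl enable docker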

Flannel container cluster network deployment

Overlay Network: a virtual network layered on top of the underlying network, in which hosts are connected by virtual links.
VXLAN: encapsulates the original packet in UDP, using the underlay's IP/MAC as the outer header, and transmits it over Ethernet; at the destination the tunnel endpoint decapsulates it and delivers the data to the target address.
Flannel: one kind of overlay network. It likewise wraps the original packet inside another network packet for routing, forwarding and communication, and currently supports UDP, VXLAN, AWS VPC, GCE routes and other forwarding backends. It provides cluster-wide connectivity while giving each node a distinct container subnet (all within one overall network, so there are no IP conflicts), and containers communicate over this network.

Two communicating containers never share an IP; they sit on two different subnets, which keeps the scheme flexible and lets it be deployed on top of any underlying network.

Calico is an alternative container network.

Deploy the Flannel network

Flannel stores its own subnet information in etcd (this is what guarantees every node a unique subnet), so it must be able to reach etcd. Write the predefined network segment first.
On 192.168.1.13 (master01), set the key /coreos.com/network/config to the value '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}':

[root@docker ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.13:2379,https://192.168.1.23:2379,https://192.168.1.24:2379" set /coreos.com/network/config '{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}'
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

[root@docker ssl]# /opt/etcd/bin/etcdctl --ca-file=ca.pem --cert-file=server.pem --key-file=server-key.pem --endpoints="https://192.168.1.13:2379,https://192.168.1.23:2379,https://192.168.1.24:2379" get /coreos.com/network/config
{ "Network": "172.17.0.0/16", "Backend": {"Type": "vxlan"}}

1. Download:

wget https://github.com/coreos/flannel/releases/download/v0.10.0/flannel-v0.10.0-linux-amd64.tar.gz

2. Copy it to each node for installation:
[root@docker k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz 192.168.1.23:/data/tools/k8s/
[root@docker k8s]# scp flannel-v0.10.0-linux-amd64.tar.gz 192.168.1.24:/data/tools/k8s/

The following steps are all performed on 192.168.1.23.
3. Unpack on 192.168.1.23:
[root@docker k8s]# tar xf flannel-v0.10.0-linux-amd64.tar.gz

4. [root@docker tools]# mkdir /opt/kubernetes/{cfg,bin,ssl} -p

5. [root@docker k8s]# mv flanneld mk-docker-opts.sh /opt/kubernetes/bin

6. Configure Flannel:
[root@docker k8s]# cat /opt/kubernetes/cfg/flanneld
FLANNEL_OPTIONS="--etcd-endpoints=https://192.168.1.13:2379,https://192.168.1.23:2379,https://192.168.1.24:2379 -etcd-cafile=/opt/etcd/ssl/ca.pem -etcd-certfile=/opt/etcd/ssl/server.pem -etcd-keyfile=/opt/etcd/ssl/server-key.pem"

7. Manage flanneld with systemd:
[root@docker k8s]# cat /usr/lib/systemd/system/flanneld.service
[Unit]
Description=Flanneld overlay address etcd agent
After=network-online.target network.target
Before=docker.service
[Service]
Type=notify
EnvironmentFile=/opt/kubernetes/cfg/flanneld
ExecStart=/opt/kubernetes/bin/flanneld --ip-masq $FLANNEL_OPTIONS
ExecStartPost=/opt/kubernetes/bin/mk-docker-opts.sh -k DOCKER_NETWORK_OPTIONS -d /run/flannel/subnet.env
Restart=on-failure
[Install]
WantedBy=multi-user.target

8. Configure Docker to use the Flannel-assigned subnet
Only the following two lines are added (so Docker picks up the subnet allocated by flanneld):
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS

[root@docker k8s]# cat /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=https://docs.docker.com
BindsTo=containerd.service
After=network-online.target firewalld.service
Wants=network-online.target
[Service]
Type=notify
EnvironmentFile=/run/flannel/subnet.env
ExecStart=/usr/bin/dockerd $DOCKER_NETWORK_OPTIONS
ExecReload=/bin/kill -s HUP $MAINPID
TimeoutSec=0
RestartSec=2
Restart=always
StartLimitBurst=3
StartLimitInterval=60s
LimitNOFILE=infinity
LimitNPROC=infinity
LimitCORE=infinity
TasksMax=infinity
Delegate=yes
KillMode=process
[Install]
WantedBy=multi-user.target
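Once flanneld has started, mk-docker-opts.sh writes the Docker options derived from the node's allocated subnet into /run/flannel/subnet.env. The exact values differ per node; on this setup it should look roughly like this (values are illustrative):

[root@docker k8s]# cat /run/flannel/subnet.env
DOCKER_NETWORK_OPTIONS=" --bip=172.17.63.1/24 --ip-masq=false --mtu=1450"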

9. Restart flanneld and Docker:
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

10. Check that it took effect:
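One way to verify, assuming the interface names Flannel uses by default (the IPs will differ per node):

ip addr show flannel.1    # the VXLAN interface created by flanneld
ip addr show docker0      # should sit in the same 172.17.x.0/24 subnet allocated to this node
ip route | grep 172.17    # routes to the other nodes' subnets via flannel.1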

The same installation is needed on 192.168.1.24 (same procedure); copy everything over:

[root@docker opt]# scp -r kubernetes/ 192.168.1.24:/opt/
[root@docker opt]# scp /usr/lib/systemd/system/flanneld.service 192.168.1.24:/usr/lib/systemd/system/
[root@docker opt]# scp /usr/lib/systemd/system/docker.service 192.168.1.24:/usr/lib/systemd/system/

Start it on 192.168.1.24:
systemctl daemon-reload
systemctl start flanneld
systemctl enable flanneld
systemctl restart docker

Verification 1: from 192.168.1.24, ping the docker0 address on 192.168.1.23
[root@docker k8s]# ping 172.17.63.1

Verification 2: start a container on each of the two nodes, then ping one container's IP from the other (see the sketch below).
Result: the pings succeed even though the two containers' IPs are on different subnets.
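A minimal sketch of that check, assuming the nodes can pull the busybox image (the container IP shown is only an example):

# on 192.168.1.23
docker run -it --rm busybox sh
/ # ip a          # note this container's IP, e.g. 172.17.63.2

# on 192.168.1.24, inside another busybox container
docker run -it --rm busybox sh
/ # ping 172.17.63.2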


The host can also ping a container's IP, so the goal of full cluster-wide connectivity is met; this is what the Flannel network provides.

Deploying the components on the master node

Before deploying Kubernetes, make sure etcd, flannel and Docker are all working properly; resolve any problems before continuing.
The following is done on the master (192.168.1.13):
1. Generate the certificates
Create the CA certificate:
[root@docker master-ca]# pwd
/data/k8s/master-ca
[root@docker master-ca]# cat ca-config.json
{
  "signing": {
    "default": {
      "expiry": "87600h"
    },
    "profiles": {
      "kubernetes": {
        "expiry": "87600h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      }
    }
  }
}

[root@docker master-ca]# cat ca-csr.json
{
  "CN": "kubernetes",
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "Beijing",
      "ST": "Beijing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Run:
cfssl gencert -initca ca-csr.json | cfssljson -bare ca -

Generate the apiserver certificate:
[root@docker master-ca]# cat server-csr.json
{
  "CN": "kubernetes",
  "hosts": [
    "10.0.0.1",
    "127.0.0.1",
    "192.168.1.13",
    "192.168.1.16",
    "192.168.1.20",
    "192.168.1.30",
    "192.168.1.26",
    "kubernetes",
    "kubernetes.default",
    "kubernetes.default.svc",
    "kubernetes.default.svc.cluster",
    "kubernetes.default.svc.cluster.local"
  ],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Note: the IPs listed in hosts are the master(s) and the cluster access addresses (including the VIP).
Run:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes server-csr.json | cfssljson -bare server

Generate the kube-proxy certificate:
[root@docker master-ca]# cat kube-proxy-csr.json
{
  "CN": "system:kube-proxy",
  "hosts": [],
  "key": {
    "algo": "rsa",
    "size": 2048
  },
  "names": [
    {
      "C": "CN",
      "L": "BeiJing",
      "ST": "BeiJing",
      "O": "k8s",
      "OU": "System"
    }
  ]
}

Run:
cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=kubernetes kube-proxy-csr.json | cfssljson -bare kube-proxy

The following certificate files are generated:
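If all three cfssl runs succeed, the directory should contain (cfssljson -bare also leaves the .csr requests behind):

[root@docker master-ca]# ls *.pem
ca-key.pem  ca.pem  kube-proxy-key.pem  kube-proxy.pem  server-key.pem  server.pem

The kube-apiserver configuration below expects the CA and server pairs under /opt/kubernetes/ssl/, so copy them there once that directory has been created (step 1 of the next section):
cp ca*.pem server*.pem /opt/kubernetes/ssl/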

Deploying the kube-apiserver component

Download the binaries from the v1.12 changelog page: https://github.com/kubernetes/kubernetes/blob/master/CHANGELOG-1.12.md. The single kubernetes-server-linux-amd64.tar.gz package is enough; it contains all of the required components.

1. mkdir /opt/kubernetes/{bin,cfg,ssl} -p
2. tar zxvf kubernetes-server-linux-amd64.tar.gz
3. cd kubernetes/server/bin
4. cp kube-apiserver kube-scheduler kube-controller-manager kubectl /opt/kubernetes/bin

Create the token file (its purpose will come up later):
cat /opt/kubernetes/cfg/token.csv

674c457d4dcf2eefe4920d7dbb6b0ddc,kubelet-bootstrap,10001,"system:kubelet-bootstrap"
# column 1: random string (generate your own); column 2: user name; column 3: UID; column 4: user group
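The first column is just random hex; a sketch of one way to generate a fresh value (borrowed from the standard TLS bootstrapping approach):

head -c 16 /dev/urandom | od -An -t x | tr -d ' '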

Create the apiserver configuration file:
[root@docker master-ca]# cat /opt/kubernetes/cfg/kube-apiserver
KUBE_APISERVER_OPTS="--logtostderr=true \
--v=4 \
--etcd-servers=https://192.168.1.13:2379,https://192.168.1.23:2379,https://192.168.1.24:2379 \
--bind-address=192.168.1.13 \
--secure-port=6443 \
--advertise-address=192.168.1.13 \
--allow-privileged=true \
--service-cluster-ip-range=10.0.0.0/24 \
--enable-admission-plugins=NamespaceLifecycle,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota,NodeRestriction \
--authorization-mode=RBAC,Node \
--enable-bootstrap-token-auth \
--token-auth-file=/opt/kubernetes/cfg/token.csv \
--service-node-port-range=30000-50000 \
--tls-cert-file=/opt/kubernetes/ssl/server.pem \
--tls-private-key-file=/opt/kubernetes/ssl/server-key.pem \
--client-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-key-file=/opt/kubernetes/ssl/ca-key.pem \
--etcd-cafile=/opt/etcd/ssl/ca.pem \
--etcd-certfile=/opt/etcd/ssl/server.pem \
--etcd-keyfile=/opt/etcd/ssl/server-key.pem"

Notes:
Point these options at the certificates generated earlier and make sure the apiserver can reach etcd.
Parameter descriptions:
--logtostderr                  log to stderr
--v                            log verbosity
--etcd-servers                 etcd cluster endpoints
--bind-address                 listen address
--secure-port                  HTTPS secure port
--advertise-address            address advertised to the cluster
--allow-privileged             allow privileged containers
--service-cluster-ip-range     virtual IP range for Services
--enable-admission-plugins     admission control plugins
--authorization-mode           authorization modes; enables RBAC and Node authorization
--enable-bootstrap-token-auth  enable TLS bootstrapping (covered later)
--token-auth-file              token file
--service-node-port-range      port range allocated to NodePort Services

Manage the apiserver with systemd:
[root@docker master-ca]# cat /usr/lib/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-apiserver
ExecStart=/opt/kubernetes/bin/kube-apiserver $KUBE_APISERVER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Start it:
systemctl daemon-reload
systemctl enable kube-apiserver
systemctl restart kube-apiserver

Check /var/log/messages for errors.
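A few hedged checks that the apiserver actually came up (the ports assume the defaults used above: 6443 for HTTPS plus the local insecure port 8080):

ps -ef | grep kube-apiserver
ss -lntp | egrep '6443|8080'
curl http://127.0.0.1:8080/healthz     # should print: ok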

Deploying the kube-scheduler component
Create the scheduler configuration file:
[root@docker master-ca]# cat /opt/kubernetes/cfg/kube-scheduler
KUBE_SCHEDULER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect"

Parameter descriptions:
--master        connect to the local apiserver
--leader-elect  when multiple instances of this component run, elect a leader automatically (for HA)

Manage the scheduler with systemd:
[root@docker master-ca]# cat /usr/lib/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-scheduler
ExecStart=/opt/kubernetes/bin/kube-scheduler $KUBE_SCHEDULER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Start it:
systemctl daemon-reload
systemctl enable kube-scheduler
systemctl restart kube-scheduler

Deploying the kube-controller-manager component
Create the controller-manager configuration file:
[root@docker master-ca]# cat /opt/kubernetes/cfg/kube-controller-manager
KUBE_CONTROLLER_MANAGER_OPTS="--logtostderr=true \
--v=4 \
--master=127.0.0.1:8080 \
--leader-elect=true \
--address=127.0.0.1 \
--service-cluster-ip-range=10.0.0.0/24 \
--cluster-name=kubernetes \
--cluster-signing-cert-file=/opt/kubernetes/ssl/ca.pem \
--cluster-signing-key-file=/opt/kubernetes/ssl/ca-key.pem \
--root-ca-file=/opt/kubernetes/ssl/ca.pem \
--service-account-private-key-file=/opt/kubernetes/ssl/ca-key.pem"

Manage the controller-manager with systemd:
[root@docker master-ca]# cat /usr/lib/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes
[Service]
EnvironmentFile=/opt/kubernetes/cfg/kube-controller-manager
ExecStart=/opt/kubernetes/bin/kube-controller-manager $KUBE_CONTROLLER_MANAGER_OPTS
Restart=on-failure
[Install]
WantedBy=multi-user.target

Start it:
systemctl daemon-reload
systemctl enable kube-controller-manager
systemctl restart kube-controller-manager

All the components have started successfully. Check the current cluster component status with kubectl:
/opt/kubernetes/bin/kubectl get cs
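Healthy output looks roughly like this (the etcd member numbering may differ):

NAME                 STATUS    MESSAGE             ERROR
scheduler            Healthy   ok
controller-manager   Healthy   ok
etcd-0               Healthy   {"health":"true"}
etcd-1               Healthy   {"health":"true"}
etcd-2               Healthy   {"health":"true"}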


Set an environment variable so the binaries are on the PATH:
vi /etc/profile
export PATH=$PATH:/opt/kubernetes/bin

[root@docker ~]# source /etc/profile


Original article: http://blog.51cto.com/jacksoner/2327043
