In this post I'll show how to install a Kubernetes cluster consisting of 1 master and 2 minions.
Environment requirements:
Three machines running 64-bit CentOS 7
master:  192.168.5.131
minion1: 192.168.5.132
minion2: 192.168.5.133
Kubernetes components:
etcd
flannel
kube-apiserver
kube-controller-manager
kube-scheduler
kubelet
kube-proxy
Part 1. Deploy on CentOS 7
(Diagram to be added later.)
Prerequisites
1. Disable iptables on every machine to avoid conflicts with Docker's own iptables rules:
$ systemctl stop firewalld
$ systemctl disable firewalld
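If you want to double-check that the firewall is really off, an optional verification (not part of the original steps):

$ systemctl is-active firewalld
inactive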
2. Install NTP and make sure it is running
$ yum -y install ntp
$ systemctl start ntpd
$ systemctl enable ntpd
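To confirm the daemon is actually talking to its upstream time servers, you can list its peers (an optional check on top of the original steps):

$ ntpq -p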
3. Install Docker on both minion machines
yum install docker -y
yum update -y
reboot
On CentOS, Docker uses devicemapper as its storage backend. A fresh install falls back to loopback devices, which makes the Docker daemon fail to start, so run the update before starting it.
ps aux | grep docker should then show something like:
/usr/bin/docker -d --selinux-enabled --storage-driver devicemapper --storage-opt dm.fs=xfs --storage-opt dm.thinpooldev=/dev/mapper/VolGroup00-docker--pool
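You can also ask Docker itself which storage backend is active; after the update it should report devicemapper rather than a loopback-backed pool (a quick check, exact output varies by Docker version):

$ docker info | grep -i storage
Storage Driver: devicemapper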
Part 2. Install the Kubernetes master
All of the following steps are executed on the master.
1. Install etcd and kubernetes via yum
yum -y install etcd kubernetes
2. Edit /etc/etcd/etcd.conf so that etcd listens on all addresses:
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"
3. Edit /etc/kubernetes/apiserver as follows:
KUBE_API_ADDRESS="--address=0.0.0.0"
KUBE_API_PORT="--port=8080"
KUBELET_PORT="--kubelet_port=10250"
KUBE_ETCD_SERVERS="--etcd_servers=http://127.0.0.1:2379"
KUBE_SERVICE_ADDRESSES="--portal_net=10.254.0.0/16"
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceAutoProvision,LimitRanger,ResourceQuota"
KUBE_API_ARGS=""
4. Edit /etc/kubernetes/controller-manager to list the minions' IP addresses:
KUBELET_ADDRESSES="--machines=192.168.5.132,192.168.5.133"
5. Start the services
for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
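Before moving on, it's worth a quick health check of etcd and the apiserver (an optional check, not part of the original walkthrough):

$ etcdctl cluster-health
$ curl http://127.0.0.1:8080/version

The first command should report the etcd member as healthy; the second returns the apiserver's version as JSON.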
6. Store the flannel network configuration in etcd; this configuration is distributed to the flannel service on each minion:
etcdctl mk /coreos.com/network/config '{"Network":"172.17.0.0/16"}'
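Reading the key back is an easy way to confirm it was stored correctly:

$ etcdctl get /coreos.com/network/config
{"Network":"172.17.0.0/16"}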
Part 3. Install the minions
All of the following steps are executed on both minion1 and minion2.
1. Install flannel and kubernetes
yum -y install flannel kubernetes
2. Point flannel at the etcd service by editing /etc/sysconfig/flanneld:
FLANNEL_ETCD="http://192.168.5.131:2379"
3. Edit the Kubernetes config file /etc/kubernetes/config to point at the master:
KUBE_MASTER="--master=http://192.168.5.131:8080"
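At this point each minion should be able to reach the apiserver over the network; a quick connectivity test (optional, not in the original post):

$ curl http://192.168.5.131:8080/api

This should return a short JSON document listing the supported API versions.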
4. Configure the kubelet service in /etc/kubernetes/kubelet.
minion1:
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.5.132"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=192.168.5.132"

# location of the api-server
KUBELET_API_SERVER="--api_servers=http://192.168.5.131:8080"

# Add your own!
KUBELET_ARGS=""
minion2:
###
# kubernetes kubelet (minion) config

# The address for the info server to serve on (set to 0.0.0.0 or "" for all interfaces)
KUBELET_ADDRESS="--address=192.168.5.133"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
KUBELET_HOSTNAME="--hostname_override=192.168.5.133"

# location of the api-server
KUBELET_API_SERVER="--api_servers=http://192.168.5.131:8080"

# Add your own!
KUBELET_ARGS=""
5. Start the services
for SERVICES in kube-proxy kubelet docker flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done
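Once flanneld is up, it records the subnet it leased from etcd in an environment file; inspecting it is a quick way to verify the lease (an optional check; the path is flannel's default):

$ cat /run/flannel/subnet.env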
6. Each minion now shows two extra network interfaces, docker0 and flannel0; their IP addresses differ from machine to machine:
minion1:
$ ip a | grep flannel | grep inet
    inet 172.17.20.0/16 scope global flannel0
minion2:
$ ip a | grep flannel | grep inet
    inet 172.17.21.0/16 scope global flannel0
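To verify that traffic really crosses the overlay, you can ping one minion's flannel0 address from the other (an optional sanity check using the example addresses from the output above). From minion1:

$ ping -c 3 172.17.21.0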
7. Now log in to the master and confirm the status of the minions:
[root@k8s_master ~]# kubectl get nodes
NAME            LABELS                                  STATUS
192.168.5.132   kubernetes.io/hostname=192.168.5.132   Ready
192.168.5.133   kubernetes.io/hostname=192.168.5.133   Ready
Great! The Kubernetes cluster is now fully configured. Next, let's create a pod.
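As a minimal smoke test, here is a sketch of a single-container pod you could submit from the master (the file name, pod name, and nginx image are arbitrary choices, not from the original post):

cat > nginx-pod.yaml <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx
    ports:
    - containerPort: 80
EOF
kubectl create -f nginx-pod.yaml
kubectl get pods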
If creating a pod fails with an error like:
is forbidden: no API token found for service account default/default, retry after the token is automatically created and added to the service account
it is because your apiserver is configured with the default admission control policies:
# default admission control policies
KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"
You need to remove SecurityContextDeny and ServiceAccount from that list.
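After removing those two entries the line should read as follows, and the apiserver needs a restart to pick up the change (the restart step is implied rather than spelled out in the original post):

KUBE_ADMISSION_CONTROL="--admission_control=NamespaceLifecycle,NamespaceExists,LimitRanger,ResourceQuota"
systemctl restart kube-apiserver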
Original post: http://foxhound.blog.51cto.com/1167932/1684391