Kubernetes v1.10.4 cluster deployment (manually generated certificates)
Note: the Docker images and YAML files used in this document can be downloaded from https://pan.baidu.com/s/1QuVelCG43_VbHiOs04R3-Q (password: 70q2)
This article is simply a record of one installation.
1. Environment
1.1 Server information
Hostname | IP address | OS version | Role
k8s01 | 172.16.50.131 | CentOS Linux release 7.4.1708 (Core) | master
k8s02 | 172.16.50.132 | CentOS Linux release 7.4.1708 (Core) | master
k8s03 | 172.16.50.104 | CentOS Linux release 7.4.1708 (Core) | master
k8s04 | 172.16.50.111 | CentOS Linux release 7.4.1708 (Core) | node
1.2 Software versions
Name | Version
kubernetes | v1.10.4
docker | 1.13.1
2. Master deployment
2.1 Server initialization
Install basic packages
yum install vim net-tools git -y
Disable SELinux
Edit /etc/sysconfig/selinux and set SELINUX=disabled
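If you would rather script this than open an editor, the same change can be made as follows (a sketch; the sed pattern assumes the stock SELINUX=enforcing line). setenforce 0 switches SELinux to permissive immediately, and the config change takes full effect after the reboot later in this section:
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/sysconfig/selinux
setenforce 0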
Disable the firewalld firewall
systemctl disable firewalld && systemctl stop firewalld
Configure passwordless SSH login from k8s01 to all nodes
# ssh-keygen
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:uKdVyzKOYp1YiuvmKuYQDice4UX2aKbzAxmzdeou3uo root@k8s01
The key's randomart image is:
+---[RSA 2048]----+
|            o    |
|         o o     |
|        + * o    |
|       . % o .   |
|      +O.. . S . |
|     ++* .. o .  |
|    .o = =..= o  |
|    oo= * o* o   |
|    BE*= .o .    |
+----[SHA256]-----+
[root@k8s01 ~]# for i in 131 132 104 111 ; do ssh-copy-id root@172.16.50.$i ; done
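A quick check that passwordless login now works from k8s01 to every node; each iteration should print the remote hostname without prompting for a password:
for i in 131 132 104 111 ; do ssh root@172.16.50.$i hostname ; done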
Update all servers, then reboot
yum upgrade -y && reboot
Upload the v1.10.4.zip archive to the /data directory on server k8s01
# unzip v1.10.4.zip && cd v1.10.4
Archive:  v1.10.4.zip
   creating: v1.10.4/
   creating: v1.10.4/images/
  inflating: v1.10.4/images/etcd-amd64_3.1.12.tar
  inflating: v1.10.4/images/flannel_v0.10.0-amd64.tar
  inflating: v1.10.4/images/heapster-amd64_v1.5.3.tar
  inflating: v1.10.4/images/k8s-dns-dnsmasq-nanny-amd64_1.14.8.tar
  inflating: v1.10.4/images/k8s-dns-kube-dns-amd64_1.14.8.tar
  inflating: v1.10.4/images/k8s-dns-sidecar-amd64_1.14.8.tar
  inflating: v1.10.4/images/kube-apiserver-amd64_v1.10.4.tar
  inflating: v1.10.4/images/kube-controller-manager-amd64_v1.10.4.tar
  inflating: v1.10.4/images/kube-proxy-amd64_v1.10.4.tar
  inflating: v1.10.4/images/kube-scheduler-amd64_v1.10.4.tar
  inflating: v1.10.4/images/kubernetes-dashboard-amd64_v1.8.3.tar
  inflating: v1.10.4/images/pause-amd64_3.1.tar
   creating: v1.10.4/pkg/
 extracting: v1.10.4/pkg/kubeadm-1.10.4-0.x86_64.rpm
  inflating: v1.10.4/pkg/kubectl-1.10.4-0.x86_64.rpm
 extracting: v1.10.4/pkg/kubelet-1.10.4-0.x86_64.rpm
  inflating: v1.10.4/pkg/kubernetes-cni-0.6.0-0.x86_64.rpm
   creating: v1.10.4/yaml/
  inflating: v1.10.4/yaml/deploy.yaml
  inflating: v1.10.4/yaml/grafana.yaml
  inflating: v1.10.4/yaml/heapster.yaml
  inflating: v1.10.4/yaml/influxdb.yaml
  inflating: v1.10.4/yaml/kube-flannel.yaml
  inflating: v1.10.4/yaml/kubernetes-dashboard.yaml
Create a directory for the configuration files
# mkdir config && cd config/
Create k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Distribute it to all hosts
for i in 131 132 104 111 ; do scp k8s.conf root@172.16.50.$i:/etc/sysctl.d/k8s.conf ; done
Apply it on every host
for i in 131 132 104 111 ; do ssh root@172.16.50.$i "modprobe br_netfilter && sysctl -p /etc/sysctl.d/k8s.conf " ; done
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Create the Kubernetes yum repository file kubernetes.repo with the following content:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
The Aliyun yum mirror is used here so the packages can be downloaded directly, without needing a proxy.
Distribute it to every server
for i in 131 132 104 111 ; do scp kubernetes.repo root@172.16.50.$i:/etc/yum.repos.d/kubernetes.repo ; done
Configure /etc/hosts
for i in 131 132 104 111 ; do ssh root@172.16.50.$i 'echo "172.16.50.131 k8s01" >> /etc/hosts && echo "172.16.50.132 k8s02" >> /etc/hosts && echo "172.16.50.104 k8s03" >> /etc/hosts && echo "172.16.50.111 k8s04" >> /etc/hosts' ; done
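To confirm the entries landed on every host, a quick check (getent resolves the names through /etc/hosts):
for i in 131 132 104 111 ; do ssh root@172.16.50.$i "getent hosts k8s01 k8s02 k8s03 k8s04" ; done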
Install Docker and the Kubernetes components on all servers
yum install docker -y && systemctl start docker.service && systemctl status docker.service && systemctl enable docker.service && yum install kubeadm kubectl kubelet -y && systemctl enable kubelet
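One caveat: the yum repository tracks the latest Kubernetes release, so over time the installed version may drift from the v1.10.4 images bundled in the archive. Two hedged alternatives, assuming the /data/v1.10.4 unzip location from above (and that pkg/ has been copied to each host for the second form):
yum install kubeadm-1.10.4 kubectl-1.10.4 kubelet-1.10.4 kubernetes-cni-0.6.0 -y
# or install the RPMs shipped in the archive instead of using the repo:
yum localinstall /data/v1.10.4/pkg/*.rpm -y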
Distribute the Docker images
cd ../images/ && for i in 131 132 104 111 ; do scp ./* root@172.16.50.$i:/mnt ; done
Run on every host to import the Docker images
for j in `ls /mnt`; do docker load --input /mnt/$j ; done
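Afterwards, docker images should list everything that was in the archive; a quick filter to verify:
docker images | grep -E 'etcd|flannel|heapster|k8s-dns|kube-|pause|dashboard'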
Manually generate the certificates on host k8s01
git clone <k8s-tls repository URL> && cd k8s-tls/
Distribute the binaries to all servers
for i in 131 132 104 111 ; do scp ./bin/* root@172.16.50.$i:/usr/bin/ ; done
Edit the apiserver.json file; after the changes it should look like this:
{ "CN": "kube-apiserver", "hosts": [ "172.16.50.131", "172.16.50.132", "172.16.50.104", "k8s01", "k8s02", "k8s03", "10.96.0.1", "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 } }
Run ./run.sh to generate the certificates
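As a sanity check (assuming run.sh writes the API server certificate to /etc/kubernetes/pki/apiserver.crt, as the next step implies), verify that all of the SANs from apiserver.json made it into the certificate:
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'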
Change into the /etc/kubernetes/pki/ directory
Edit the node.sh file:
ip="172.16.50.131" NODE="k8s01"
Edit the kubelet.json file:
"CN": "system:node:k8s01",
Run ./node.sh to generate the kubeconfig files
Change into the /data/v1.10.4/config directory and create config.yaml with the following content:
apiVersion: kubeadm.k8s.io/v1alpha1
kind: MasterConfiguration
kubernetesVersion: v1.10.4
networking:
  podSubnet: 10.244.0.0/16
apiServerCertSANs:
- k8s01
- k8s02
- k8s03
- 172.16.50.131
- 172.16.50.132
- 172.16.50.104
- 172.16.50.227
apiServerExtraArgs:
  endpoint-reconciler-type: "lease"
etcd:
  endpoints:
  - http://172.16.50.132:2379
  - http://172.16.50.131:2379
  - http://172.16.50.104:2379
token: "deed3a.b3542929fcbce0f0"
tokenTTL: "0"
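Two fields worth noting: tokenTTL: "0" makes the bootstrap token never expire, so the join command printed below keeps working, and the token itself is just a string in the abcdef.0123456789abcdef format. If you want a fresh value instead of reusing deed3a.b3542929fcbce0f0, kubeadm can generate one:
kubeadm token generate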
Initialize the cluster on host k8s01
# kubeadm init --config config.yaml
[init] Using Kubernetes version: v1.10.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
	[WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/admin.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Using existing up-to-date KubeConfig file: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 15.506913 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s01 as master by adding a label and a taint
[markmaster] Master k8s01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: deed3a.b3542929fcbce0f0
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.16.50.131:6443 --token deed3a.b3542929fcbce0f0 --discovery-token-ca-cert-hash sha256:0334022c7eb4f2b20865f1784c64b1e81ad87761b9e8ffd50ecefabca5cfad5c
Distribute the certificate files to servers k8s02 and k8s03
for i in 131 132 104 111 ; do ssh root@172.16.50.$i "mkdir /etc/kubernetes/pki/ " ; done
for i in 132 104 ; do scp /etc/kubernetes/pki/* root@172.16.50.$i:/etc/kubernetes/pki/ ; done
Distribute the config.yaml file to servers k8s02 and k8s03
for i in 132 104 ; do scp config.yaml root@172.16.50.$i:/mnt ; done
Follow-up steps
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
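With the kubeconfig in place, it is worth verifying the control plane is healthy before continuing:
kubectl get componentstatuses
kubectl get nodes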
Initialize the cluster on host k8s02
cd /etc/kubernetes/pki/
Edit the node.sh file:
ip="172.16.50.132" NODE="k8s02"
Edit the kubelet.json file:
"CN": "system:node:k8s02",
Generate the kubeconfig files
./node.sh
Initialize the cluster
kubeadm init --config /mnt/config.yaml
Initialize node k8s03 the same way
Join the worker node
kubeadm join 172.16.50.131:6443 --token deed3a.b3542929fcbce0f0 --discovery-token-ca-cert-hash sha256:0334022c7eb4f2b20865f1784c64b1e81ad87761b9e8ffd50ecefabca5cfad5c
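Back on k8s01, confirm the worker registered; it will report NotReady until the pod network below is deployed:
kubectl get nodes -o wide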
No load balancer has been set up in front of the Kubernetes API here; the worker joins master node k8s01 directly.
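One step the record leaves implicit: the init output above asks for a pod network, the podSubnet of 10.244.0.0/16 in config.yaml matches flannel's default, and the archive ships kube-flannel.yaml, so presumably the network add-on is deployed with:
kubectl apply -f /data/v1.10.4/yaml/kube-flannel.yaml
Once the flannel pods are running, all nodes should move to Ready in kubectl get nodes.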
Original article: http://blog.51cto.com/11889458/2130621