I. Introduction

Kubernetes is one of the hottest technologies today, and it has become the de facto standard for open-source PaaS platforms. Most existing articles build Kubernetes on x86; in this post I walk through setting up an open-source Kubernetes platform on LinuxONE.

There are two mainstream ways to build a K8s platform, manual binary deployment and kubeadm; this article uses kubeadm.
1. Environment

OS version | IP address | Hostname |
---|---|---|
Ubuntu 18.04.1 | 172.16.35.140 | master |
Ubuntu 18.04.1 | | worker-1 |
2. Install Docker

Install the prerequisite packages:
apt-get install apt-transport-https ca-certificates curl gnupg-agent software-properties-common
Add Docker's official GPG key:
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Add the Docker repository (note the s390x architecture):
sudo add-apt-repository "deb [arch=s390x] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
Install Docker:
apt-get update; apt-get install docker-ce
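Later on, kubeadm's preflight check warns that Docker is using the "cgroupfs" cgroup driver (you will see the warning in the init output below). Optionally, switch Docker to the recommended systemd driver now; a minimal sketch, assuming a fresh install with no running containers:

# Point Docker at the systemd cgroup driver, which kubeadm recommends.
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl restart docker
docker info | grep -i "cgroup driver"   # should now report: systemd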
3. Install kubelet, kubeadm and kubectl

Add the package source (using the Aliyun mirror, since the upstream Google repository is not reachable here):
apt-get update && apt-get install -y apt-transport-https
curl https://mirrors.aliyun.com/kubernetes/apt/doc/apt-key.gpg | apt-key add -
cat <<EOF >/etc/apt/sources.list.d/kubernetes.list
deb https://mirrors.aliyun.com/kubernetes/apt/ kubernetes-xenial main
EOF
apt-get update
apt-get install -y kubelet kubeadm kubectl
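To keep a routine apt upgrade from moving the cluster to a new version unexpectedly, you can optionally pin these packages:

apt-mark hold kubelet kubeadm kubectl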
4. Initialize the cluster with kubeadm

Before initializing, make the following preparations on every node:

1. Hostname-based communication between the nodes
2. Time synchronization
3. Firewall disabled
4. Swap off and kernel parameters set:

swapoff -a
sysctl -w vm.swappiness=0
sysctl -w net.ipv4.ip_forward=1
echo 1 > /proc/sys/net/bridge/bridge-nf-call-ip6tables
echo 1 > /proc/sys/net/bridge/bridge-nf-call-iptables
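The swapoff and sysctl settings above only last until the next reboot. A minimal sketch of making them persistent, assuming standard Ubuntu paths (the /etc/hosts entry covers item 1; add worker-1's IP once it is assigned):

# Hostname resolution between the nodes
echo "172.16.35.140 master" >> /etc/hosts
# Disable swap permanently by commenting out swap entries in /etc/fstab
sed -ri 's/^([^#].*[[:space:]]swap[[:space:]].*)$/# \1/' /etc/fstab
# Persist the kernel parameters
modprobe br_netfilter                      # the bridge sysctls need this module
cat <<EOF > /etc/sysctl.d/99-kubernetes.conf
vm.swappiness = 0
net.ipv4.ip_forward = 1
net.bridge.bridge-nf-call-iptables = 1
net.bridge.bridge-nf-call-ip6tables = 1
EOF
sysctl --system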
Check which base Docker images kubeadm needs:
root@master:/etc/apt# kubeadm config images list
W0321 08:51:12.828065 19587 version.go:101] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: dial tcp: lookup dl.k8s.io on 127.0.0.53:53: server misbehaving
W0321 08:51:12.828143 19587 version.go:102] falling back to the local client version: v1.17.4
W0321 08:51:12.828250 19587 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0321 08:51:12.828275 19587 validation.go:28] Cannot validate kubelet config - no validator is available
k8s.gcr.io/kube-apiserver:v1.17.4
k8s.gcr.io/kube-controller-manager:v1.17.4
k8s.gcr.io/kube-scheduler:v1.17.4
k8s.gcr.io/kube-proxy:v1.17.4
k8s.gcr.io/pause:3.1
k8s.gcr.io/etcd:3.4.3-0
k8s.gcr.io/coredns:1.6.5
The required images are now listed. Because k8s.gcr.io is not directly reachable from our network, we have to download them ourselves. I have pushed s390x builds of these images to my Docker Hub account, so you can pull them from there:
docker pull erickshi/kube-apiserver-s390x:v1.17.4
docker pull erickshi/kube-scheduler-s390x:v1.17.4
docker pull erickshi/kube-controller-manager-s390x:v1.17.4
docker pull erickshi/pause-s390x:3.1
docker pull erickshi/coredns:s390x-1.6.5
docker pull erickshi/etcd-s390x:3.4.3-0

(kube-proxy:v1.17.4 from the list above is also needed; obtain an s390x build of it and tag it the same way.)
After downloading, retag the images to exactly the names kubeadm listed, since those are the names it will try to pull:
docker tag erickshi/kube-apiserver-s390x:v1.17.4 k8s.gcr.io/kube-apiserver:v1.17.4
docker tag erickshi/kube-scheduler-s390x:v1.17.4 k8s.gcr.io/kube-scheduler:v1.17.4
docker tag erickshi/kube-controller-manager-s390x:v1.17.4 k8s.gcr.io/kube-controller-manager:v1.17.4
docker tag erickshi/pause-s390x:3.1 k8s.gcr.io/pause:3.1
docker tag erickshi/etcd-s390x:3.4.3-0 k8s.gcr.io/etcd:3.4.3-0
docker tag erickshi/coredns:s390x-1.6.5 k8s.gcr.io/coredns:1.6.5
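If you would rather script the pull-and-retag steps, here is a minimal sketch using the mirror names above (adjust the map if your mirror repository differs):

#!/bin/bash
# Map each Docker Hub mirror image to the k8s.gcr.io name kubeadm expects.
declare -A images=(
  ["erickshi/kube-apiserver-s390x:v1.17.4"]="k8s.gcr.io/kube-apiserver:v1.17.4"
  ["erickshi/kube-controller-manager-s390x:v1.17.4"]="k8s.gcr.io/kube-controller-manager:v1.17.4"
  ["erickshi/kube-scheduler-s390x:v1.17.4"]="k8s.gcr.io/kube-scheduler:v1.17.4"
  ["erickshi/pause-s390x:3.1"]="k8s.gcr.io/pause:3.1"
  ["erickshi/etcd-s390x:3.4.3-0"]="k8s.gcr.io/etcd:3.4.3-0"
  ["erickshi/coredns:s390x-1.6.5"]="k8s.gcr.io/coredns:1.6.5"
)
for src in "${!images[@]}"; do
  docker pull "$src"
  docker tag "$src" "${images[$src]}"
done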
Now initialize the cluster:
root@master:~# kubeadm init --kubernetes-version=v1.17.4 --service-cidr=10.96.0.0/12
W0321 09:57:23.233367 9597 validation.go:28] Cannot validate kube-proxy config - no validator is available
W0321 09:57:23.233401 9597 validation.go:28] Cannot validate kubelet config - no validator is available
[init] Using Kubernetes version: v1.17.4
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Starting the kubelet
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 172.16.35.140]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master localhost] and IPs [172.16.35.140 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master localhost] and IPs [172.16.35.140 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
W0321 09:57:32.529825 9597 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[control-plane] Creating static Pod manifest for "kube-scheduler"
W0321 09:57:32.530693 9597 manifests.go:214] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 12.001699 seconds
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.17" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --upload-certs
[mark-control-plane] Marking the node master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: rp81u6.x7rky04rds2knxb8
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
kubeadm join 172.16.35.140:6443 --token rp81u6.x7rky04rds2knxb8 --discovery-token-ca-cert-hash sha256:ff32332337f679859b4c34a888c42c963b86148f3ede24bf980a435183beb4be
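The bootstrap token in this join command expires after 24 hours by default. If you add a worker later, regenerate a full join command on the master:

kubeadm token create --print-join-command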
Initialization succeeded. Next, copy the admin credentials to the expected location:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Now check the current environment:
root@master:/etc/apt# kubectl get node
NAME STATUS ROLES AGE VERSION
master NotReady master 159m v1.17.4
root@master:/etc/apt# kubectl get cs
NAME STATUS MESSAGE ERROR
scheduler Healthy ok
controller-manager Healthy ok
etcd-0 Healthy {"health":"true"}
Next, look at the services hosted on Kubernetes:
root@master:/etc/apt# kubectl get pod --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-6955765f44-gfjk2 0/1 Pending 0 159m
kube-system coredns-6955765f44-l25vq 0/1 Pending 0 159m
kube-system etcd-master 1/1 Running 0 159m
kube-system kube-apiserver-master 1/1 Running 0 159m
kube-system kube-controller-manager-master 1/1 Running 0 159m
kube-system kube-proxy-xfw6v 1/1 Running 0 159m
kube-system kube-scheduler-master 1/1 Running 0 159m
Note that the two CoreDNS pods are Pending: they will stay that way until a pod network add-on is deployed. So next we install the Flannel network.

First, download the Flannel image:
wget https://github.com/coreos/flannel/releases/download/v0.12.0/flanneld-v0.12.0-s390x.docker
docker load < flanneld-v0.12.0-s390x.docker
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
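Before checking the node, you can watch the Flannel DaemonSet pod come up (the app=flannel label is assumed from the manifest applied above). Also note that this kube-flannel.yml defaults to the 10.244.0.0/16 pod network; kubeadm init is normally run with --pod-network-cidr=10.244.0.0/16 so that nodes are allocated matching podCIDRs:

kubectl -n kube-system get pods -l app=flannel -o wide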
Check the node again:
root@master:~# kubectl get node
NAME STATUS ROLES AGE VERSION
master Ready master 2m31s v1.17.4
root@master:~#
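As a final smoke test, you can start a throwaway pod and check that it gets a pod IP and can resolve cluster DNS. This is a sketch assuming the multi-arch busybox image (which ships s390x builds); the pod name is arbitrary:

kubectl run dnstest --image=busybox:1.28 --restart=Never --command -- sleep 3600
kubectl get pod dnstest -o wide             # should show a pod IP and Running
kubectl exec dnstest -- nslookup kubernetes.default
kubectl delete pod dnstest                  # clean up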
Source: https://blog.51cto.com/shyln/2480704