Kubernetes v1.10.4 Single-Node Deployment (Manually Generated Certificates)
1. Environment
1.1 Server Information
Hostname | IP Address | OS Version | Role
k8s01 | 172.16.50.131 | CentOS Linux release 7.4.1708 (Core) | master node
1.2 Software Versions
Name | Version
kubernetes | v1.10.4
docker | 1.13.1
2. Deployment
2.1 Server Initialization
Install basic packages
yum install vim net-tools git -y
Disable SELinux
Edit /etc/sysconfig/selinux and set SELINUX=disabled
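This can also be done non-interactively; a minimal sketch, assuming the stock CentOS 7 layout where /etc/sysconfig/selinux points at /etc/selinux/config:
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/sysconfig/selinux   # persist the setting across reboots
setenforce 0                                                       # stop enforcing for the current boot; the file change fully applies after reboot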
Disable the firewalld firewall
systemctl disable firewalld && systemctl stop firewalld
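An optional check that the service is really down:
systemctl is-active firewalld   # should print "inactive"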
Update the system, then reboot
yum update -y && reboot
Adjust kernel parameters
Create the file /etc/sysctl.d/k8s.conf with the following content:
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
Apply the settings:
sysctl -p /etc/sysctl.d/k8s.conf
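The file can also be written and applied in one step; a minimal sketch:
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl -p /etc/sysctl.d/k8s.conf
If sysctl reports that these keys do not exist, the br_netfilter module may need to be loaded first (modprobe br_netfilter).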
2.2 Install kubeadm, kubectl, kubelet, and docker
2.2.1 Add the yum repository
Create the file /etc/yum.repos.d/k8s.repo with the following content:
[kubernetes]
name=Kubernetes
baseurl=https://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64/
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg https://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
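An optional sanity check that the repository resolves:
yum repolist enabled | grep -i kubernetes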
2.2.2 Install
yum install kubeadm kubectl kubelet docker -y
systemctl enable docker && systemctl enable kubelet
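Note that the repository installs the newest packages by default, which may be newer than the v1.10.4 images imported below. If the matching builds are still available in the mirror, the versions can be pinned explicitly; a sketch, assuming 1.10.4 packages exist in the repository:
yum install -y kubeadm-1.10.4 kubectl-1.10.4 kubelet-1.10.4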
2.3 Import the Docker Images
List of image archives
[root@k8s01 images]# ll
total 1050028
-rw-r--r-- 1 root root 193461760 Jun 14 11:15 etcd-amd64_3.1.12.tar
-rw-r--r-- 1 root root  45306368 Jun 14 11:15 flannel_v0.10.0-amd64.tar
-rw-r--r-- 1 root root  75337216 Jun 14 11:15 heapster-amd64_v1.5.3.tar
-rw-r--r-- 1 root root  41239040 Jun 14 11:15 k8s-dns-dnsmasq-nanny-amd64_1.14.8.tar
-rw-r--r-- 1 root root  50727424 Jun 14 11:15 k8s-dns-kube-dns-amd64_1.14.8.tar
-rw-r--r-- 1 root root  42481152 Jun 14 11:15 k8s-dns-sidecar-amd64_1.14.8.tar
-rw-r--r-- 1 root root 225355776 Jun 14 11:16 kube-apiserver-amd64_v1.10.4.tar
-rw-r--r-- 1 root root 148135424 Jun 14 11:16 kube-controller-manager-amd64_v1.10.4.tar
-rw-r--r-- 1 root root  98951168 Jun 14 11:16 kube-proxy-amd64_v1.10.4.tar
-rw-r--r-- 1 root root 102800384 Jun 14 11:16 kubernetes-dashboard-amd64_v1.8.3.tar
-rw-r--r-- 1 root root  50658304 Jun 14 11:16 kube-scheduler-amd64_v1.10.4.tar
-rw-r--r-- 1 root root    754176 Jun 14 11:16 pause-amd64_3.1.tar
Import them
for j in ./*.tar; do docker load --input "$j"; done
Check the imported images
docker images
REPOSITORY                                  TAG             IMAGE ID       CREATED        SIZE
k8s.gcr.io/kube-proxy-amd64                 v1.10.4         3f9ff47d0fca   7 days ago     97.1 MB
k8s.gcr.io/kube-controller-manager-amd64    v1.10.4         1a24f5586598   7 days ago     148 MB
k8s.gcr.io/kube-apiserver-amd64             v1.10.4         afdd56622af3   7 days ago     225 MB
k8s.gcr.io/kube-scheduler-amd64             v1.10.4         6fffbea311f0   7 days ago     50.4 MB
k8s.gcr.io/heapster-amd64                   v1.5.3          f57c75cd7b0a   6 weeks ago    75.3 MB
k8s.gcr.io/etcd-amd64                       3.1.12          52920ad46f5b   3 months ago   193 MB
k8s.gcr.io/kubernetes-dashboard-amd64       v1.8.3          0c60bcf89900   4 months ago   102 MB
quay.io/coreos/flannel                      v0.10.0-amd64   f0fad859c909   4 months ago   44.6 MB
k8s.gcr.io/k8s-dns-dnsmasq-nanny-amd64      1.14.8          c2ce1ffb51ed   5 months ago   41 MB
k8s.gcr.io/k8s-dns-sidecar-amd64            1.14.8          6f7f2dc7fab5   5 months ago   42.2 MB
k8s.gcr.io/k8s-dns-kube-dns-amd64           1.14.8          80cc5ea4b547   5 months ago   50.5 MB
k8s.gcr.io/pause-amd64                      3.1             da86e6ba6ca1   5 months ago   742 kB
2.4 Certificate Generation
Clone the certificate script from GitHub
git clone && cd k8s-tls/ && chmod +x run.sh
Edit the apiserver.json file; replace 172.16.50.131 with the local machine's IP address and k8s01 with its hostname
{ "CN": "kube-apiserver", "hosts": [ "172.16.50.131", #本机IP地址 "10.96.0.1", "k8s01", #本机主机名 "kubernetes", "kubernetes.default", "kubernetes.default.svc", "kubernetes.default.svc.cluster", "kubernetes.default.svc.cluster.local" ], "key": { "algo": "rsa", "size": 2048 } }
Run the script
./run.sh
Check the generated certificates
ll /etc/kubernetes/pki/
total 48
-rw-r--r-- 1 root root 1403 Jun 14 11:34 apiserver.crt
-rw------- 1 root root 1679 Jun 14 11:34 apiserver.key
-rw-r--r-- 1 root root 1257 Jun 14 11:34 apiserver-kubelet-client.crt
-rw------- 1 root root 1675 Jun 14 11:34 apiserver-kubelet-client.key
-rw-r--r-- 1 root root 1143 Jun 14 11:34 ca.crt
-rw------- 1 root root 1675 Jun 14 11:34 ca.key
-rw-r--r-- 1 root root 1143 Jun 14 11:34 front-proxy-ca.crt
-rw------- 1 root root 1679 Jun 14 11:34 front-proxy-ca.key
-rw-r--r-- 1 root root 1208 Jun 14 11:34 front-proxy-client.crt
-rw------- 1 root root 1675 Jun 14 11:34 front-proxy-client.key
-rw-r--r-- 1 root root 891 Jun 14 11:34 sa.key
-rw-r--r-- 1 root root 272 Jun 14 11:34 sa.pub
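To confirm that the generated apiserver certificate actually contains the hosts listed in apiserver.json, the SANs can be inspected with openssl (optional check):
openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text | grep -A1 'Subject Alternative Name'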
2.5 Initialize the Master
kubeadm init --apiserver-advertise-address=172.16.50.131 --pod-network-cidr=10.244.0.0/16 --kubernetes-version=v1.10.4
[init] Using Kubernetes version: v1.10.4
[init] Using Authorization modes: [Node RBAC]
[preflight] Running pre-flight checks.
    [WARNING FileExisting-crictl]: crictl not found in system path
Suggestion: go get github.com/kubernetes-incubator/cri-tools/cmd/crictl
[preflight] Starting the kubelet service
[certificates] Using the existing ca certificate and key.
[certificates] Using the existing apiserver certificate and key.
[certificates] Using the existing apiserver-kubelet-client certificate and key.
[certificates] Using the existing sa key.
[certificates] Using the existing front-proxy-ca certificate and key.
[certificates] Using the existing front-proxy-client certificate and key.
[certificates] Generated etcd/ca certificate and key.
[certificates] Generated etcd/server certificate and key.
[certificates] etcd/server serving cert is signed for DNS names [localhost] and IPs [127.0.0.1]
[certificates] Generated etcd/peer certificate and key.
[certificates] etcd/peer serving cert is signed for DNS names [k8s01] and IPs [172.16.50.131]
[certificates] Generated etcd/healthcheck-client certificate and key.
[certificates] Generated apiserver-etcd-client certificate and key.
[certificates] Valid certificates and keys now exist in "/etc/kubernetes/pki"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/admin.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/kubelet.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/controller-manager.conf"
[kubeconfig] Wrote KubeConfig file to disk: "/etc/kubernetes/scheduler.conf"
[controlplane] Wrote Static Pod manifest for component kube-apiserver to "/etc/kubernetes/manifests/kube-apiserver.yaml"
[controlplane] Wrote Static Pod manifest for component kube-controller-manager to "/etc/kubernetes/manifests/kube-controller-manager.yaml"
[controlplane] Wrote Static Pod manifest for component kube-scheduler to "/etc/kubernetes/manifests/kube-scheduler.yaml"
[etcd] Wrote Static Pod manifest for a local etcd instance to "/etc/kubernetes/manifests/etcd.yaml"
[init] Waiting for the kubelet to boot up the control plane as Static Pods from directory "/etc/kubernetes/manifests".
[init] This might take a minute or longer if the control plane images have to be pulled.
[apiclient] All control plane components are healthy after 25.004724 seconds
[uploadconfig] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[markmaster] Will mark node k8s01 as master by adding a label and a taint
[markmaster] Master k8s01 tainted and labelled with key/value: node-role.kubernetes.io/master=""
[bootstraptoken] Using token: doz2px.eb7wncizjnwnk63q
[bootstraptoken] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstraptoken] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstraptoken] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstraptoken] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: kube-dns
[addons] Applied essential addon: kube-proxy

Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

You can now join any number of machines by running the following on each node as root:

  kubeadm join 172.16.50.131:6443 --token doz2px.eb7wncizjnwnk63q --discovery-token-ca-cert-hash sha256:bc57f885b7be70cb94b457bc4795de3e678058eb05082658ab79629696d1884b
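The token in the join command expires after 24 hours by default. A new join command can be printed later if needed; a sketch (kubeadm 1.9+):
kubeadm token create --print-join-command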
Follow-up steps
mkdir -p $HOME/.kube
cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
Since there is only one server and no worker nodes, allow the master to schedule workloads:
kubectl taint nodes $HOSTNAME node-role.kubernetes.io/master-
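To verify that the master taint was removed and the node can accept workloads (optional check):
kubectl describe node $HOSTNAME | grep -i taint   # should report no NoSchedule taint
kubectl get nodes                                  # the node stays NotReady until a network plugin is deployed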
Check the pods. kube-dns stays Pending until a pod network plugin is deployed:
kubectl get pods --all-namespaces
NAMESPACE     NAME                            READY     STATUS    RESTARTS   AGE
kube-system   etcd-k8s01                      1/1       Running   0          1m
kube-system   kube-apiserver-k8s01            1/1       Running   0          59s
kube-system   kube-controller-manager-k8s01   1/1       Running   0          1m
kube-system   kube-dns-86f4d74b45-psqrb       0/3       Pending   0          1m
kube-system   kube-proxy-8sgzr                1/1       Running   0          1m
kube-system   kube-scheduler-k8s01            1/1       Running   0          1m
2.6 Deploy the flannel Network Plugin
The manifest is in the yaml directory of the package
cd /data/v1.10.4/yaml
# kubectl create -f kube-flannel.yaml
clusterrole.rbac.authorization.k8s.io "flannel" created
clusterrolebinding.rbac.authorization.k8s.io "flannel" created
serviceaccount "flannel" created
configmap "kube-flannel-cfg" created
daemonset.extensions "kube-flannel-ds" created
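Once the flannel DaemonSet is running, the node should become Ready and kube-dns should move from Pending to Running; a quick check:
kubectl get nodes
kubectl get pods -n kube-system -o wide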
To be continued...
Original article: http://blog.51cto.com/11889458/2129294