Author: 李毓

Installing Kubernetes with kubeadm has one big pitfall: certificate expiry. The certificates involved include those for the apiserver, the kubelet, etcd, the front proxy, and so on. The problem does not exist with a binary installation, where you manage the certificates by hand; kubeadm generates them automatically, so they have to be dealt with after the fact.
There are currently three common solutions. The first is a cluster upgrade: upgrading Kubernetes renews the certificates as a side effect. The second is to modify the source code, i.e. recompile kubeadm with a longer validity period. The third is to regenerate the certificates directly.
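For reference, the third approach needs neither an upgrade nor a recompile: kubeadm (v1.15+, including the v1.18 used here) can renew its certificates in place via the alpha certs subcommands. A minimal sketch, wrapped in a function with a guard so it is a no-op on machines without kubeadm; note the control-plane static pods still have to be restarted afterwards to load the new certificates:

```shell
# Option 3 sketch: renew all kubeadm-managed certificates in place,
# without upgrading the cluster or recompiling anything.
renew_certs() {
    if command -v kubeadm >/dev/null 2>&1; then
        kubeadm alpha certs check-expiration   # show remaining validity of each cert
        kubeadm alpha certs renew all          # renew every certificate under /etc/kubernetes/pki
        # the control-plane static pods must then be restarted to load the new certs
    else
        echo "kubeadm not found; run this on a control-plane node"
    fi
}

renew_certs
```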
Kubernetes v1.18.8 has just been released, so I'll take this opportunity to demonstrate the two-birds-one-stone trick of upgrading the cluster and renewing the certificates in one go.
Three machines: one master and two worker nodes, all on v1.18.6.
[root@adm-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
adm-master Ready master 37d v1.18.6
adm-node1 Ready <none> 37d v1.18.6
adm-node2 Ready <none> 37d v1.18.6
Check the certificate validity period:
[root@adm-master ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
Not Before: Aug 1 13:41:05 2020 GMT
Not After : Aug 1 13:41:05 2021 GMT
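The same check can be run over every certificate kubeadm manages, not just the apiserver's. A small sketch; PKI_DIR defaults to the /etc/kubernetes/pki path used above, but the function works on any directory of .crt files (openssl's -startdate/-enddate print the same two dates as the grep above, in notBefore=/notAfter= form):

```shell
# Print the validity window of every certificate in a PKI directory.
cert_dates() {
    for crt in "$1"/*.crt; do
        [ -f "$crt" ] || continue          # skip silently if the directory is empty
        echo "== $crt"
        openssl x509 -in "$crt" -noout -startdate -enddate
    done
}

cert_dates "${PKI_DIR:-/etc/kubernetes/pki}"
```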
First, let's look at the kubeadm version:
[root@adm-master ~]# kubeadm version
kubeadm version: &version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.6", GitCommit:"dff82dc0de47299ab66c83c626e08b245ab19037", GitTreeState:"clean", BuildDate:"2020-07-15T16:56:34Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
As you can see, it matches the cluster version.
Check the upgrade plan with kubeadm upgrade plan:
[root@adm-master ~]# kubeadm upgrade plan
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade] Fetching available versions to upgrade to
[upgrade/versions] Cluster version: v1.18.0
[upgrade/versions] kubeadm version: v1.18.6
I0907 23:36:36.404303 26584 version.go:252] remote version is much newer: v1.19.0; falling back to: stable-1.18
[upgrade/versions] Latest stable version: v1.18.8
[upgrade/versions] Latest stable version: v1.18.8
W0907 23:36:52.285681 26584 version.go:102] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.18.txt": Get https://storage.googleapis.com/kubernetes-release/release/stable-1.18.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
W0907 23:36:52.285711 26584 version.go:103] falling back to the local client version: v1.18.6
[upgrade/versions] Latest version in the v1.18 series: v1.18.6
[upgrade/versions] Latest version in the v1.18 series: v1.18.6
Upgrade to the latest version in the v1.18 series:
COMPONENT CURRENT AVAILABLE
API Server v1.18.0 v1.18.6
Controller Manager v1.18.0 v1.18.6
Scheduler v1.18.0 v1.18.6
Kube Proxy v1.18.0 v1.18.6
CoreDNS 1.6.7 1.6.7
Etcd 3.4.3 3.4.3-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.18.6
Components that must be upgraded manually after you have upgraded the control plane with 'kubeadm upgrade apply':
COMPONENT CURRENT AVAILABLE
Kubelet 3 x v1.18.6 v1.18.8
Upgrade to the latest stable version:
COMPONENT CURRENT AVAILABLE
API Server v1.18.0 v1.18.8
Controller Manager v1.18.0 v1.18.8
Scheduler v1.18.0 v1.18.8
Kube Proxy v1.18.0 v1.18.8
CoreDNS 1.6.7 1.6.7
Etcd 3.4.3 3.4.3-0
You can now apply the upgrade by executing the following command:
kubeadm upgrade apply v1.18.8
Note: Before you can perform this upgrade, you have to update kubeadm to v1.18.8.
Start the upgrade:
[root@adm-master ~]# kubeadm upgrade apply v1.18.8
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.8"
[upgrade/versions] Cluster version: v1.18.0
[upgrade/versions] kubeadm version: v1.18.6
[upgrade/version] FATAL: the --version argument is invalid due to these errors:
- Specified version to upgrade to "v1.18.8" is higher than the kubeadm version "v1.18.6". Upgrade kubeadm first using the tool you used to install kubeadm
Can be bypassed if you pass the --force flag
To see the stack trace of this error execute with --v=5 or higher
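As the error message says, the clean way out is to upgrade kubeadm itself first and then run apply without --force (the walkthrough below opts for --force instead). A sketch of the recommended path, guarded so it does nothing on machines without yum and kubeadm:

```shell
# Recommended alternative to --force: upgrade kubeadm first, then apply.
upgrade_kubeadm_first() {
    if command -v yum >/dev/null 2>&1 && command -v kubeadm >/dev/null 2>&1; then
        yum install -y kubeadm-1.18.8-0    # bring kubeadm itself to the target version
        kubeadm upgrade apply v1.18.8      # no --force needed once versions match
    else
        echo "yum/kubeadm not found; run this on the CentOS master"
    fi
}

upgrade_kubeadm_first
```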
[root@adm-master ~]# kubeadm upgrade apply v1.18.8 --force
[upgrade/config] Making sure the configuration is correct:
[upgrade/config] Reading configuration from the cluster...
[upgrade/config] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[preflight] Running pre-flight checks.
[upgrade] Running cluster health checks
[upgrade/version] You have chosen to change the cluster version to "v1.18.8"
[upgrade/versions] Cluster version: v1.18.0
[upgrade/versions] kubeadm version: v1.18.6
[upgrade/version] Found 1 potential version compatibility errors but skipping since the --force flag is set:
- Specified version to upgrade to "v1.18.8" is higher than the kubeadm version "v1.18.6". Upgrade kubeadm first using the tool you used to install kubeadm
[upgrade/prepull] Will prepull images for components [kube-apiserver kube-controller-manager kube-scheduler etcd]
[upgrade/prepull] Prepulling image for component etcd.
[upgrade/prepull] Prepulling image for component kube-apiserver.
[upgrade/prepull] Prepulling image for component kube-controller-manager.
[upgrade/prepull] Prepulling image for component kube-scheduler.
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-controller-manager
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 0 Pods for label selector k8s-app=upgrade-prepull-etcd
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-apiserver
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-kube-scheduler
[apiclient] Found 1 Pods for label selector k8s-app=upgrade-prepull-etcd
[upgrade/prepull] Prepulled image for component etcd.
[upgrade/prepull] Prepulled image for component kube-apiserver.
[upgrade/prepull] Prepulled image for component kube-controller-manager.
[upgrade/prepull] Prepulled image for component kube-scheduler.
[upgrade/prepull] Successfully prepulled the images for all the control plane components
[upgrade/apply] Upgrading your Static Pod-hosted control plane to version "v1.18.8"...
Static pod: kube-apiserver-adm-master hash: 6861528d68248c9d1178280d3594d8db
Static pod: kube-controller-manager-adm-master hash: 871d07b6ec226107d162a636bad7f0aa
Static pod: kube-scheduler-adm-master hash: 35d86d4a27d4ed2186b3ab641e946a02
[upgrade/etcd] Upgrading to TLS for etcd
[upgrade/etcd] Non fatal issue encountered during upgrade: the desired etcd version for this Kubernetes version "v1.18.8" is "3.4.3-0", but the current etcd version is "3.4.3". Won't downgrade etcd, instead just continue
[upgrade/staticpods] Writing new Static Pod manifests to "/etc/kubernetes/tmp/kubeadm-upgraded-manifests642601366"
W0907 23:42:33.736477 28017 manifests.go:225] the default kube-apiserver authorization-mode is "Node,RBAC"; using "Node,RBAC"
[upgrade/staticpods] Preparing for "kube-apiserver" upgrade
[upgrade/staticpods] Renewing apiserver certificate
[upgrade/staticpods] Renewing apiserver-kubelet-client certificate
[upgrade/staticpods] Renewing front-proxy-client certificate
[upgrade/staticpods] Renewing apiserver-etcd-client certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-apiserver.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-07-23-42-25/kube-apiserver.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-apiserver-adm-master hash: 6861528d68248c9d1178280d3594d8db
Static pod: kube-apiserver-adm-master hash: 253cf8d5be40058a076cd11584613b96
[apiclient] Found 1 Pods for label selector component=kube-apiserver
[upgrade/staticpods] Component "kube-apiserver" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-controller-manager" upgrade
[upgrade/staticpods] Renewing controller-manager.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-controller-manager.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-07-23-42-25/kube-controller-manager.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-controller-manager-adm-master hash: 871d07b6ec226107d162a636bad7f0aa
Static pod: kube-controller-manager-adm-master hash: 628316be9d303769769e096bfd3537e4
[apiclient] Found 1 Pods for label selector component=kube-controller-manager
[upgrade/staticpods] Component "kube-controller-manager" upgraded successfully!
[upgrade/staticpods] Preparing for "kube-scheduler" upgrade
[upgrade/staticpods] Renewing scheduler.conf certificate
[upgrade/staticpods] Moved new manifest to "/etc/kubernetes/manifests/kube-scheduler.yaml" and backed up old manifest to "/etc/kubernetes/tmp/kubeadm-backup-manifests-2020-09-07-23-42-25/kube-scheduler.yaml"
[upgrade/staticpods] Waiting for the kubelet to restart the component
[upgrade/staticpods] This might take a minute or longer depending on the component/version gap (timeout 5m0s)
Static pod: kube-scheduler-adm-master hash: 35d86d4a27d4ed2186b3ab641e946a02
Static pod: kube-scheduler-adm-master hash: 7363eeef53899d60c792412f58124026
[apiclient] Found 1 Pods for label selector component=kube-scheduler
[upgrade/staticpods] Component "kube-scheduler" upgraded successfully!
[upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.18" in namespace kube-system with the configuration for the kubelets in the cluster
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.18" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to get nodes
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
[upgrade/successful] SUCCESS! Your cluster was upgraded to "v1.18.8". Enjoy!
[upgrade/kubelet] Now that your control plane is upgraded, please proceed with upgrading your kubelets if you haven't already done so.
Check the image versions and the cluster version:
[root@adm-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
adm-master Ready master 37d v1.18.6
adm-node1 Ready <none> 37d v1.18.6
adm-node2 Ready <none> 37d v1.18.6
[root@adm-master ~]# docker images
REPOSITORY TAG IMAGE ID CREATED SIZE
registry.aliyuncs.com/google_containers/kube-proxy v1.18.8 0fb7201f92d0 3 weeks ago 117MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.18.8 6a979351fe5e 3 weeks ago 162MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.18.8 92d040a0dca7 3 weeks ago 173MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.18.8 6f7135fb47e0 3 weeks ago 95.3MB
registry.aliyuncs.com/google_containers/kube-proxy v1.18.0 43940c34f24f 5 months ago 117MB
registry.aliyuncs.com/google_containers/kube-controller-manager v1.18.0 d3e55153f52f 5 months ago 162MB
registry.aliyuncs.com/google_containers/kube-apiserver v1.18.0 74060cea7f70 5 months ago 173MB
registry.aliyuncs.com/google_containers/kube-scheduler v1.18.0 a31f78c7c8ce 5 months ago 95.3MB
registry.cn-shenzhen.aliyuncs.com/carp/flannel v0.11 f60e29a33f27 6 months ago 52.6MB
registry.aliyuncs.com/google_containers/pause 3.2 80d28bedfe5d 6 months ago 683kB
registry.aliyuncs.com/google_containers/coredns 1.6.7 67da37a9a360 7 months ago 43.8MB
registry.aliyuncs.com/google_containers/etcd 3.4.3-0 303ce5db0e90 10 months ago 288MB
kubectl get nodes still shows v1.18.6 because it reports each node's kubelet version; kubelet and kubectl have to be upgraded separately. On the master:
[root@adm-master ~]# yum install -y kubelet-1.18.8-0 kubeadm-1.18.8-0 kubectl-1.18.8-0
systemctl daemon-reload
systemctl restart kubelet
[root@adm-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
adm-master Ready master 37d v1.18.8
adm-node1 Ready <none> 37d v1.18.6
adm-node2 Ready <none> 37d v1.18.6
The master is now on v1.18.8; the nodes are not yet. Run the following on each node:
yum install -y kubelet-1.18.8-0
systemctl daemon-reload
systemctl restart kubelet
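One note on the node step: the official kubeadm upgrade procedure also drains each node before upgrading its kubelet and uncordons it afterwards, which the quick version above skips. A hedged sketch (node name taken from this cluster; the guard makes it a no-op on machines without kubectl):

```shell
# Per-node upgrade sketch: evict workloads, upgrade the kubelet, readmit the node.
upgrade_node() {
    if command -v kubectl >/dev/null 2>&1; then
        kubectl drain "$1" --ignore-daemonsets    # move workloads off the node first
        # then, on the node itself:
        #   yum install -y kubelet-1.18.8-0 && systemctl daemon-reload && systemctl restart kubelet
        kubectl uncordon "$1"                     # make the node schedulable again
    else
        echo "kubectl not found; run this where you have cluster access"
    fi
}

upgrade_node adm-node1
```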
Check again:
[root@adm-master ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
adm-master Ready master 37d v1.18.8
adm-node1 Ready <none> 37d v1.18.8
adm-node2 Ready <none> 37d v1.18.8
The cluster upgrade succeeded.
Incidentally, let's check the certificate dates again:
[root@adm-master ~]# openssl x509 -in /etc/kubernetes/pki/apiserver.crt -noout -text |grep ' Not '
Not Before: Aug 1 13:41:05 2020 GMT
Not After : Sep 7 15:42:34 2021 GMT
The expiry date has changed.
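To confirm the renewed certificate is not close to expiring without eyeballing dates, openssl's -checkend flag (which takes a number of seconds) gives a scriptable yes/no answer; the example path is the one checked above:

```shell
# Exit 0 if the certificate in $1 is still valid for at least $2 more days,
# exit 1 otherwise -- convenient for a cron job that warns before expiry.
cert_ok_for_days() {
    openssl x509 -noout -checkend $(( $2 * 86400 )) -in "$1"
}

# Usage on the master (path from the walkthrough above):
# cert_ok_for_days /etc/kubernetes/pki/apiserver.crt 30 && echo OK || echo "renew soon"
```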
And with that, the two-birds-one-stone plan is complete.
Original post: https://blog.51cto.com/14783669/2529969