
kubespray 1.14.3: kubelet fails to start after renewing expired cluster certificates
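After the cluster's expired certificates were renewed, kubelet on worker node k8s-testn2 no longer stays up: systemd restarts the unit, kubelet runs for roughly 50 seconds, then exits with status 255 and the cycle repeats. The journal excerpt below captures one full restart cycle.

To reproduce the capture and rule out the renewed client certificate itself, a minimal check on the failing node (paths taken from the log below; adjust for your environment):

# Follow the kubelet unit journal while it crash-loops
journalctl -u kubelet --no-pager -f

# Confirm the renewed kubelet client cert is the one on disk and still valid
openssl x509 -in /var/lib/kubelet/pki/kubelet-client-current.pem -noout -dates -subject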


Aug 26 09:43:44 k8s-testn2 systemd[1]: kubelet.service failed.
Aug 26 09:43:54 k8s-testn2 systemd[1]: kubelet.service holdoff time over, scheduling restart.
Aug 26 09:43:54 k8s-testn2 systemd[1]: Stopped Kubernetes Kubelet Server.
Aug 26 09:43:54 k8s-testn2 systemd[1]: Starting Kubernetes Kubelet Server...
Aug 26 09:43:54 k8s-testn2 systemd[1]: Started Kubernetes Kubelet Server.
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.371222 18749 server.go:417] Version: v1.18.4
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.371721 18749 plugins.go:100] No cloud provider specified.
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.371751 18749 server.go:838] Client rotation is on, will bootstrap in background
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.386525 18749 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.387646 18749 dynamic_cafile_content.go:167] Starting client-ca-bundle::/etc/kubernetes/pki/ca.crt
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.480616 18749 server.go:647] --cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481062 18749 container_manager_linux.go:266] container manager verified user specified cgroup-root exists: []
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481079 18749 container_manager_linux.go:271] Creating Container Manager object based on Node Config: {RuntimeCgroupsName: SystemCgroupsName: KubeletCgroupsName: ContainerRuntime:docker CgroupsPerQOS:true CgroupRoot:/ CgroupDriver:cgroupfs KubeletRootDir:/var/lib/kubelet ProtectKernelDefaults:false NodeAllocatableConfig:{KubeReservedCgroupName: SystemReservedCgroupName: ReservedSystemCPUs: EnforceNodeAllocatable:map[pods:{}] KubeReserved:map[] SystemReserved:map[] HardEvictionThresholds:[{Signal:nodefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.1} GracePeriod:0s MinReclaim:<nil>} {Signal:nodefs.inodesFree Operator:LessThan Value:{Quantity:<nil> Percentage:0.05} GracePeriod:0s MinReclaim:<nil>} {Signal:imagefs.available Operator:LessThan Value:{Quantity:<nil> Percentage:0.15} GracePeriod:0s MinReclaim:<nil>} {Signal:memory.available Operator:LessThan Value:{Quantity:100Mi Percentage:0} GracePeriod:0s MinReclaim:<nil>}]} QOSReserved:map[] ExperimentalCPUManagerPolicy:none ExperimentalCPUManagerReconcilePeriod:10s ExperimentalPodPidsLimit:-1 EnforceCPULimits:true CPUCFSQuotaPeriod:100ms ExperimentalTopologyManagerPolicy:none}
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481180 18749 topology_manager.go:126] [topologymanager] Creating topology manager with none policy
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481188 18749 container_manager_linux.go:301] [topologymanager] Initializing Topology Manager with none policy
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481193 18749 container_manager_linux.go:306] Creating device plugin manager: true
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481276 18749 client.go:75] Connecting to docker on unix:///var/run/docker.sock
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.481288 18749 client.go:92] Start docker client with request timeout=2m0s
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: W0826 09:43:54.494578 18749 docker_service.go:561] Hairpin mode set to "promiscuous-bridge" but kubenet is not enabled, falling back to "hairpin-veth"
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.494603 18749 docker_service.go:238] Hairpin mode set to "hairpin-veth"
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.521279 18749 docker_service.go:253] Docker cri networking managed by kubernetes.io/no-op
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.534510 18749 docker_service.go:258] Docker Info: &{ID:E2WL:6TPH:MEEW:UH6L:DZKZ:3SL2:5IKV:53YI:GTYJ:GQUJ:JGB6:WMS5 Containers:12 ContainersRunning:10 ContainersPaused:0 ContainersStopped:2 Images:27 Driver:overlay2 DriverStatus:[[Backing Filesystem xfs] [Supports d_type true] [Native Overlay Diff true]] SystemStatus:[] Plugins:{Volume:[local] Network:[bridge host ipvlan macvlan null overlay] Authorization:[] Log:[awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog]} MemoryLimit:true SwapLimit:true KernelMemory:true KernelMemoryTCP:true CPUCfsPeriod:true CPUCfsQuota:true CPUShares:true CPUSet:true PidsLimit:true IPv4Forwarding:true BridgeNfIptables:true BridgeNfIP6tables:true Debug:false NFd:76 OomKillDisable:true NGoroutines:82 SystemTime:2020-08-26T09:43:54.522838067+08:00 LoggingDriver:json-file CgroupDriver:cgroupfs NEventsListener:0 KernelVersion:3.10.0-1127.10.1.el7.x86_64 OperatingSystem:CentOS Linux 7 (Core) OSType:linux Architecture:x86_64 IndexServerAddress:https://index.docker.io/v1/ RegistryConfig:0xc000183490 NCPU:4 MemTotal:8200974336 GenericResources:[] DockerRootDir:/var/lib/docker HTTPProxy: HTTPSProxy: NoProxy: Name:k8s-testn2 Labels:[] ExperimentalBuild:false ServerVersion:19.03.12 ClusterStore: ClusterAdvertise: Runtimes:map[runc:{Path:runc Args:[]}] DefaultRuntime:runc Swarm:{NodeID: NodeAddr: LocalNodeState:inactive ControlAvailable:false Error: RemoteManagers:[] Nodes:0 Managers:0 Cluster:<nil> Warnings:[]} LiveRestoreEnabled:false Isolation: InitBinary:docker-init ContainerdCommit:{ID:7ad184331fa3e55e52b890ea95e65ba581ae3429 Expected:7ad184331fa3e55e52b890ea95e65ba581ae3429} RuncCommit:{ID:dc9208a3303feef5b3839f4323d9beb36df0a9dd Expected:dc9208a3303feef5b3839f4323d9beb36df0a9dd} InitCommit:{ID:fec3683 Expected:fec3683} SecurityOptions:[name=seccomp,profile=default] ProductLicense: Warnings:[]}
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.534634 18749 docker_service.go:271] Setting cgroupDriver to cgroupfs
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550325 18749 remote_runtime.go:59] parsed scheme: ""
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550353 18749 remote_runtime.go:59] scheme "" not registered, fallback to default scheme
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550390 18749 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550406 18749 clientconn.go:933] ClientConn switching balancer to "pick_first"
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550465 18749 remote_image.go:50] parsed scheme: ""
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550473 18749 remote_image.go:50] scheme "" not registered, fallback to default scheme
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550483 18749 passthrough.go:48] ccResolverWrapper: sending update to cc: {[{/var/run/dockershim.sock <nil> 0 <nil>}] <nil> <nil>}
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550489 18749 clientconn.go:933] ClientConn switching balancer to "pick_first"
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550525 18749 kubelet.go:292] Adding pod path: /etc/kubernetes/manifests
Aug 26 09:43:54 k8s-testn2 kubelet[18749]: I0826 09:43:54.550555 18749 kubelet.go:317] Watching apiserver
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: E0826 09:44:00.567792 18749 aws_credentials.go:77] while getting AWS credentials NoCredentialProviders: no valid providers in chain. Deprecated.
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: For verbose messaging see aws.Config.CredentialsChainVerboseErrors
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.578556 18749 kuberuntime_manager.go:211] Container runtime docker initialized, version: 19.03.12, apiVersion: 1.40.0
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.579010 18749 server.go:1126] Started kubelet
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: E0826 09:44:00.579168 18749 kubelet.go:1305] Image garbage collection failed once. Stats initialization may not have completed yet: failed to get imageFs info: unable to find data in memory cache
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.580659 18749 fs_resource_analyzer.go:64] Starting FS ResourceAnalyzer
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.580850 18749 server.go:145] Starting to listen on 0.0.0.0:10250
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.582788 18749 volume_manager.go:265] Starting Kubelet Volume Manager
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.587907 18749 server.go:393] Adding debug handlers to kubelet server.
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.593435 18749 desired_state_of_world_populator.go:139] Desired state populator starts to run
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: E0826 09:44:00.598626 18749 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: the server could not find the requested resource
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.616392 18749 status_manager.go:158] Starting to sync pod status with apiserver
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.616433 18749 kubelet.go:1821] Starting kubelet main sync loop.
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: E0826 09:44:00.616474 18749 kubelet.go:1845] skipping pod synchronization - [container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: W0826 09:44:00.639281 18749 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/local-volume-provisioner-7kwt5 through plugin: invalid network status for
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.682854 18749 kubelet_node_status.go:294] Setting node annotation to enable volume controller attach/detach
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: E0826 09:44:00.717432 18749 kubelet.go:1845] skipping pod synchronization - container runtime status check may not have completed yet
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.734979 18749 kubelet_node_status.go:70] Attempting to register node k8s-testn2
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793184 18749 cpu_manager.go:184] [cpumanager] starting with none policy
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793204 18749 cpu_manager.go:185] [cpumanager] reconciling every 10s
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793237 18749 state_mem.go:36] [cpumanager] initializing new in-memory state store
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793579 18749 state_mem.go:88] [cpumanager] updated default cpuset: ""
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793593 18749 state_mem.go:96] [cpumanager] updated cpuset assignments: "map[]"
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.793611 18749 policy_none.go:43] [cpumanager] none policy: Start
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.795922 18749 plugin_manager.go:114] Starting Kubelet Plugin Manager
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.917859 18749 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.919267 18749 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.920352 18749 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.921848 18749 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.923316 18749 topology_manager.go:233] [topologymanager] Topology Admit Handler
Aug 26 09:44:00 k8s-testn2 kubelet[18749]: I0826 09:44:00.997672 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/1512623d-9a38-11ea-95cb-525400c76348-kube-proxy") pod "kube-proxy-vqlh2" (UID: "1512623d-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.097893 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/6ad69eac-9a38-11ea-95cb-525400c76348-xtables-lock") pod "nodelocaldns-tdldf" (UID: "6ad69eac-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.097929 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "sys" (UniqueName: "kubernetes.io/host-path/d847ce13-9a38-11ea-95cb-525400c76348-sys") pod "node-exporter-bj6b4" (UID: "d847ce13-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.097951 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "host-cni-bin" (UniqueName: "kubernetes.io/host-path/ec73a9b7-9a38-11ea-95cb-525400c76348-host-cni-bin") pod "kube-flannel-rq79q" (UID: "ec73a9b7-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.097978 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-volume-provisioner" (UniqueName: "kubernetes.io/configmap/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner") pod "local-volume-provisioner-7kwt5" (UID: "1e531955-9a39-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098014 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-volume-provisioner-hostpath-local-storage" (UniqueName: "kubernetes.io/host-path/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner-hostpath-local-storage") pod "local-volume-provisioner-7kwt5" (UID: "1e531955-9a39-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098036 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "kube-proxy-token-nx4mq" (UniqueName: "kubernetes.io/secret/1512623d-9a38-11ea-95cb-525400c76348-kube-proxy-token-nx4mq") pod "kube-proxy-vqlh2" (UID: "1512623d-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098054 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "config-volume" (UniqueName: "kubernetes.io/configmap/6ad69eac-9a38-11ea-95cb-525400c76348-config-volume") pod "nodelocaldns-tdldf" (UID: "6ad69eac-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098071 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "cni" (UniqueName: "kubernetes.io/host-path/ec73a9b7-9a38-11ea-95cb-525400c76348-cni") pod "kube-flannel-rq79q" (UID: "ec73a9b7-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098092 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-token-q9pgj" (UniqueName: "kubernetes.io/secret/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-token-q9pgj") pod "kube-flannel-rq79q" (UID: "ec73a9b7-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098111 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "nodelocaldns-token-cmjsd" (UniqueName: "kubernetes.io/secret/6ad69eac-9a38-11ea-95cb-525400c76348-nodelocaldns-token-cmjsd") pod "nodelocaldns-tdldf" (UID: "6ad69eac-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098128 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "root" (UniqueName: "kubernetes.io/host-path/d847ce13-9a38-11ea-95cb-525400c76348-root") pod "node-exporter-bj6b4" (UID: "d847ce13-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098147 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "node-exporter-token-2wqvq" (UniqueName: "kubernetes.io/secret/d847ce13-9a38-11ea-95cb-525400c76348-node-exporter-token-2wqvq") pod "node-exporter-bj6b4" (UID: "d847ce13-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098164 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "run" (UniqueName: "kubernetes.io/host-path/ec73a9b7-9a38-11ea-95cb-525400c76348-run") pod "kube-flannel-rq79q" (UID: "ec73a9b7-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098182 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "xtables-lock" (UniqueName: "kubernetes.io/host-path/1512623d-9a38-11ea-95cb-525400c76348-xtables-lock") pod "kube-proxy-vqlh2" (UID: "1512623d-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098201 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "lib-modules" (UniqueName: "kubernetes.io/host-path/1512623d-9a38-11ea-95cb-525400c76348-lib-modules") pod "kube-proxy-vqlh2" (UID: "1512623d-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098222 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "flannel-cfg" (UniqueName: "kubernetes.io/configmap/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-cfg") pod "kube-flannel-rq79q" (UID: "ec73a9b7-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098254 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "local-volume-provisioner-token-k94qb" (UniqueName: "kubernetes.io/secret/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner-token-k94qb") pod "local-volume-provisioner-7kwt5" (UID: "1e531955-9a39-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098300 18749 reconciler.go:224] operationExecutor.VerifyControllerAttachedVolume started for volume "proc" (UniqueName: "kubernetes.io/host-path/d847ce13-9a38-11ea-95cb-525400c76348-proc") pod "node-exporter-bj6b4" (UID: "d847ce13-9a38-11ea-95cb-525400c76348")
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.098312 18749 reconciler.go:157] Reconciler: start to sync state
Aug 26 09:44:01 k8s-testn2 kubelet[18749]: I0826 09:44:01.979564 18749 request.go:621] Throttling request took 1.0598077s, request: GET:https://10.11.37.61:6443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%3Dnodelocaldns&limit=500&resourceVersion=0
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.199974 18749 configmap.go:200] Couldn't get configMap kube-system/local-volume-provisioner: failed to sync configmap cache: timed out waiting for the condition
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200110 18749 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner podName:1e531955-9a39-11ea-95cb-525400c76348 nodeName:}" failed. No retries permitted until 2020-08-26 09:44:02.700058846 +0800 CST m=+8.398976615 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"local-volume-provisioner\" (UniqueName: \"kubernetes.io/configmap/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner\") pod \"local-volume-provisioner-7kwt5\" (UID: \"1e531955-9a39-11ea-95cb-525400c76348\") : failed to sync configmap cache: timed out waiting for the condition"
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200440 18749 configmap.go:200] Couldn't get configMap kube-system/kube-flannel-cfg: failed to sync configmap cache: timed out waiting for the condition
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200518 18749 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/configmap/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-cfg podName:ec73a9b7-9a38-11ea-95cb-525400c76348 nodeName:}" failed. No retries permitted until 2020-08-26 09:44:02.700491755 +0800 CST m=+8.399409526 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"flannel-cfg\" (UniqueName: \"kubernetes.io/configmap/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-cfg\") pod \"kube-flannel-rq79q\" (UID: \"ec73a9b7-9a38-11ea-95cb-525400c76348\") : failed to sync configmap cache: timed out waiting for the condition"
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200551 18749 secret.go:195] Couldn't get secret kube-system/local-volume-provisioner-token-k94qb: failed to sync secret cache: timed out waiting for the condition
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200619 18749 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner-token-k94qb podName:1e531955-9a39-11ea-95cb-525400c76348 nodeName:}" failed. No retries permitted until 2020-08-26 09:44:02.700591078 +0800 CST m=+8.399508880 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"local-volume-provisioner-token-k94qb\" (UniqueName: \"kubernetes.io/secret/1e531955-9a39-11ea-95cb-525400c76348-local-volume-provisioner-token-k94qb\") pod \"local-volume-provisioner-7kwt5\" (UID: \"1e531955-9a39-11ea-95cb-525400c76348\") : failed to sync secret cache: timed out waiting for the condition"
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200643 18749 secret.go:195] Couldn't get secret kube-system/flannel-token-q9pgj: failed to sync secret cache: timed out waiting for the condition
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200694 18749 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-token-q9pgj podName:ec73a9b7-9a38-11ea-95cb-525400c76348 nodeName:}" failed. No retries permitted until 2020-08-26 09:44:02.700669926 +0800 CST m=+8.399587698 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"flannel-token-q9pgj\" (UniqueName: \"kubernetes.io/secret/ec73a9b7-9a38-11ea-95cb-525400c76348-flannel-token-q9pgj\") pod \"kube-flannel-rq79q\" (UID: \"ec73a9b7-9a38-11ea-95cb-525400c76348\") : failed to sync secret cache: timed out waiting for the condition"
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200717 18749 secret.go:195] Couldn't get secret kubesphere-monitoring-system/node-exporter-token-2wqvq: failed to sync secret cache: timed out waiting for the condition
Aug 26 09:44:02 k8s-testn2 kubelet[18749]: E0826 09:44:02.200790 18749 nestedpendingoperations.go:301] Operation for "{volumeName:kubernetes.io/secret/d847ce13-9a38-11ea-95cb-525400c76348-node-exporter-token-2wqvq podName:d847ce13-9a38-11ea-95cb-525400c76348 nodeName:}" failed. No retries permitted until 2020-08-26 09:44:02.700765169 +0800 CST m=+8.399682940 (durationBeforeRetry 500ms). Error: "MountVolume.SetUp failed for volume \"node-exporter-token-2wqvq\" (UniqueName: \"kubernetes.io/secret/d847ce13-9a38-11ea-95cb-525400c76348-node-exporter-token-2wqvq\") pod \"node-exporter-bj6b4\" (UID: \"d847ce13-9a38-11ea-95cb-525400c76348\") : failed to sync secret cache: timed out waiting for the condition"
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: I0826 09:44:03.023888 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ef42d1e9d35dad7cf3d609668c944b130c70cc31c794d752c4144bc862e1d15e
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: I0826 09:44:03.583725 18749 kubelet_node_status.go:112] Node k8s-testn2 was previously registered
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: I0826 09:44:03.583840 18749 kubelet_node_status.go:73] Successfully registered node k8s-testn2
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: I0826 09:44:03.624431 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71d52d1f329a093302b80820c5deffa9f7f5c6685c28bf260457e26cb12c0a80
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: W0826 09:44:03.699283 18749 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/local-volume-provisioner-7kwt5 through plugin: invalid network status for
Aug 26 09:44:03 k8s-testn2 kubelet[18749]: E0826 09:44:03.981146 18749 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: the server could not find the requested resource
Aug 26 09:44:04 k8s-testn2 kubelet[18749]: W0826 09:44:04.764535 18749 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/local-volume-provisioner-7kwt5 through plugin: invalid network status for
Aug 26 09:44:04 k8s-testn2 kubelet[18749]: E0826 09:44:04.780818 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:06 k8s-testn2 kubelet[18749]: E0826 09:44:06.780725 18749 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: the server could not find the requested resource
Aug 26 09:44:09 k8s-testn2 kubelet[18749]: E0826 09:44:09.980945 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:11 k8s-testn2 kubelet[18749]: E0826 09:44:11.180978 18749 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: the server could not find the requested resource
Aug 26 09:44:14 k8s-testn2 kubelet[18749]: E0826 09:44:14.380959 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:16 k8s-testn2 kubelet[18749]: E0826 09:44:16.981057 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:18 k8s-testn2 kubelet[18749]: E0826 09:44:18.969833 18749 reflector.go:178] k8s.io/client-go/informers/factory.go:135: Failed to list *v1.CSIDriver: the server could not find the requested resource
Aug 26 09:44:21 k8s-testn2 kubelet[18749]: E0826 09:44:21.937083 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:33 k8s-testn2 kubelet[18749]: I0826 09:44:33.941619 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: ef42d1e9d35dad7cf3d609668c944b130c70cc31c794d752c4144bc862e1d15e
Aug 26 09:44:33 k8s-testn2 kubelet[18749]: I0826 09:44:33.942071 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 99ec916d3d03e1637742d8fac962d63a7f73c6493f1587796ee8b45c1ce5512e
Aug 26 09:44:33 k8s-testn2 kubelet[18749]: E0826 09:44:33.943175 18749 pod_workers.go:191] Error syncing pod ec73a9b7-9a38-11ea-95cb-525400c76348 ("kube-flannel-rq79q_kube-system(ec73a9b7-9a38-11ea-95cb-525400c76348)"), skipping: failed to "StartContainer" for "kube-flannel" with CrashLoopBackOff: "back-off 10s restarting failed container=kube-flannel pod=kube-flannel-rq79q_kube-system(ec73a9b7-9a38-11ea-95cb-525400c76348)"
Aug 26 09:44:34 k8s-testn2 kubelet[18749]: W0826 09:44:34.948762 18749 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/local-volume-provisioner-7kwt5 through plugin: invalid network status for
Aug 26 09:44:34 k8s-testn2 kubelet[18749]: I0826 09:44:34.954769 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 71d52d1f329a093302b80820c5deffa9f7f5c6685c28bf260457e26cb12c0a80
Aug 26 09:44:34 k8s-testn2 kubelet[18749]: I0826 09:44:34.955191 18749 topology_manager.go:219] [topologymanager] RemoveContainer - Container ID: 99f4d3118b09754f2523c80f7148bc1b0fb72152d43ba820344511de2937658b
Aug 26 09:44:34 k8s-testn2 kubelet[18749]: E0826 09:44:34.955730 18749 pod_workers.go:191] Error syncing pod 1e531955-9a39-11ea-95cb-525400c76348 ("local-volume-provisioner-7kwt5_kube-system(1e531955-9a39-11ea-95cb-525400c76348)"), skipping: failed to "StartContainer" for "provisioner" with CrashLoopBackOff: "back-off 10s restarting failed container=provisioner pod=local-volume-provisioner-7kwt5_kube-system(1e531955-9a39-11ea-95cb-525400c76348)"
Aug 26 09:44:35 k8s-testn2 kubelet[18749]: W0826 09:44:35.971212 18749 docker_sandbox.go:400] failed to read pod IP from plugin/docker: Couldn't find network status for kube-system/local-volume-provisioner-7kwt5 through plugin: invalid network status for
Aug 26 09:44:43 k8s-testn2 kubelet[18749]: E0826 09:44:43.510270 18749 csi_plugin.go:277] Failed to initialize CSINode: error updating CSINode annotation: timed out waiting for the condition; caused by: the server could not find the requested resource
Aug 26 09:44:43 k8s-testn2 kubelet[18749]: F0826 09:44:43.510289 18749 csi_plugin.go:291] Failed to initialize CSINode after retrying: timed out waiting for the condition
Aug 26 09:44:43 k8s-testn2 systemd[1]: kubelet.service: main process exited, code=exited, status=255/n/a
Aug 26 09:44:43 k8s-testn2 systemd[1]: Unit kubelet.service entered failed state.
Aug 26 09:44:43 k8s-testn2 systemd[1]: kubelet.service failed.
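The fatal line at 09:44:43 (csi_plugin.go:291, "Failed to initialize CSINode after retrying") is what actually terminates the process; everything before it is a normal startup. Note the version skew visible in the log: this kubelet is v1.18.4, and it repeatedly fails to list the storage.k8s.io/v1 CSIDriver resource ("the server could not find the requested resource"), which is what you would expect against an apiserver that still serves only the older v1beta1 storage API, such as the 1.14.3 control plane named in the title. Kubernetes' skew policy does not allow a kubelet newer than its apiserver, so after the certificate renewal the kubelet can authenticate again, but then dies on CSINode initialization.

A quick way to confirm the skew from a control-plane node (a sketch, assuming admin kubectl access):

# Compare client and server versions
kubectl version --short

# List which storage.k8s.io API versions the apiserver actually serves
kubectl api-versions | grep storage.k8s.io

If storage.k8s.io/v1 is missing from the output, either roll the node's kubelet back to the apiserver's minor version or upgrade the control plane first, then restart kubelet.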


Original source: https://blog.51cto.com/docker/2524175
