Besides letting the Kubernetes scheduler pick a node for each pod automatically (default scheduling mainly checks that resources are sufficient and tries to spread load evenly), there are situations where we want more control over where a pod lands. For example, some machines in the cluster have better hardware (SSDs, more memory, and so on) and we want core services such as databases to run there; or two services exchange traffic very frequently and we would like them on the same machine, or at least in the same data center. Some special applications should run only on nodes we designate, and in other cases we want one copy of an application running on every node.
For these different scenarios, Kubernetes ships several built-in scheduling mechanisms to choose from, including label selectors (nodeSelector), DaemonSets, node affinity, pod affinity, and taints and tolerations.
This is the most common, label-based approach: attach a specific label to a node, then reference that label with nodeSelector when starting the pod, so the scheduler only places the pod on nodes carrying the label.
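As a minimal sketch of the workflow (the node-name placeholder and the disktype=ssd label here are illustrative, not labels from the cluster below): first label a node, then reference the label in the pod spec.

kubectl label node <node-name> disktype=ssd

apiVersion: v1
kind: Pod
metadata:
  name: nginx-ssd
spec:
  nodeSelector:
    disktype: ssd        # pod may only be scheduled to nodes with this label
  containers:
  - name: nginx
    image: nginx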
[root@docker-server1 ingress]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
192.168.132.131   Ready    master   3d18h   v1.17.0
192.168.132.132   Ready    <none>   3d18h   v1.17.0
192.168.132.133   Ready    <none>   3d18h   v1.17.0
[root@docker-server1 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5c6985f9cc-wkngv   1/1     Running   0          22h   192.168.132.132   192.168.132.132   <none>           <none>
[root@docker-server1 ingress]# kubectl get nodes --show-labels
NAME              STATUS   ROLES    AGE     VERSION   LABELS
192.168.132.131   Ready    master   3d18h   v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.132.131,kubernetes.io/os=linux,node-role.kubernetes.io/master=
192.168.132.132   Ready    <none>   3d18h   v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.132.132,kubernetes.io/os=linux
192.168.132.133   Ready    <none>   3d18h   v1.17.0   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=192.168.132.133,kubernetes.io/os=linux
The labels every node carries by default describe the operating system, the node hostname, and the CPU architecture.
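To print just these built-in labels as columns instead of the full label dump, the -L/--label-columns flag of kubectl get can be used, for example:

kubectl get nodes -L kubernetes.io/os,kubernetes.io/arch,kubernetes.io/hostname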
[root@docker-server1 ingress]# vi nginx-controller.yaml
nodeSelector:
kubernetes.io/os: linux
Under the current conditions this label is effectively a no-op as a constraint: every node in the cluster runs Linux, so it does not narrow the choice at all.
nodeSelector:
  kubernetes.io/os: linux
  kubernetes.io/hostname: 192.168.132.133   # this is a unique, per-node label
Multiple labels may be listed, and they are combined with logical AND: a node qualifies only when it matches every label. If no node satisfies all of them, the pod stays Pending.
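The same AND semantics can be verified from the command line: a comma in a label selector passed to -l also means AND, so a query like the one below (reusing the labels above) lists exactly the nodes that would qualify:

kubectl get nodes -l kubernetes.io/os=linux,kubernetes.io/hostname=192.168.132.133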
[root@docker-server1 ingress]# kubectl apply -f nginx-controller.yaml
namespace/ingress-nginx unchanged
configmap/nginx-configuration unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/nginx-ingress-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/nginx-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding unchanged
deployment.apps/nginx-ingress-controller configured
limitrange/ingress-nginx configured
[root@docker-server1 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS              RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5c6985f9cc-wkngv   1/1     Running             0          22h   192.168.132.132   192.168.132.132   <none>           <none>
nginx-ingress-controller-5cffd956bf-dm9qf   0/1     ContainerCreating   0          4s    192.168.132.133   192.168.132.133   <none>           <none>
[root@docker-server1 ingress]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE    IP                NODE              NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5cffd956bf-dm9qf   1/1     Running   0          4m3s   192.168.132.133   192.168.132.133   <none>           <none>
The controller is now running on 192.168.132.133.
To see what happens when nothing matches, here is a busybox Deployment whose nodeSelector (aaa: bbb) matches no existing node label:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  namespace: default
spec:
  replicas: 1
  selector:
    matchLabels:
      name: busybox
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  template:
    metadata:
      labels:
        name: busybox
    spec:
      nodeSelector:
        aaa: bbb
      containers:
      - name: busybox
        image: busybox
        command:
        - /bin/sh
        - -c
        - "sleep 3600"
[root@docker-server1 deployment]# kubectl apply -f busybox-deployment.yaml
deployment.apps/busybox configured
[root@docker-server1 deployment]# kubectl get pods
busybox-546555c84-2psbb    1/1   Running   13   24h
busybox-674bd96f74-m4fst   0/1   Pending   0    13s
[root@docker-server1 deployment]# kubectl describe pods busybox-674bd96f74-m4fst
Type     Reason            Age        From               Message
----     ------            ----       ----               -------
Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
Warning  FailedScheduling  <unknown>  default-scheduler  0/3 nodes are available: 3 node(s) didn't match node selector.
The pod stays in Pending because no node carries a matching label.
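A quick way to confirm that nothing matches is to query nodes by the same selector; here the result should come back empty ("No resources found"):

kubectl get nodes -l aaa=bbb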
[root@docker-server1 deployment]# kubectl label node 192.168.132.132 aaa=bbb
node/192.168.132.132 labeled
[root@docker-server1 deployment]# kubectl get pods
NAME                       READY   STATUS              RESTARTS   AGE
busybox-546555c84-2psbb    1/1     Running             13         24h
busybox-674bd96f74-m4fst   0/1     ContainerCreating   0          7m26s
[root@docker-server1 deployment]# kubectl describe pods busybox-674bd96f74-m4fst
Events:
  Type     Reason            Age        From                       Message
  ----     ------            ----       ----                       -------
  Warning  FailedScheduling  <unknown>  default-scheduler          0/3 nodes are available: 3 node(s) didn't match node selector.
  Warning  FailedScheduling  <unknown>  default-scheduler          0/3 nodes are available: 3 node(s) didn't match node selector.
  Normal   Scheduled         <unknown>  default-scheduler          Successfully assigned default/busybox-674bd96f74-m4fst to 192.168.132.132
  Normal   Pulling           38s        kubelet, 192.168.132.132   Pulling image "busybox"
  Normal   Pulled            34s        kubelet, 192.168.132.132   Successfully pulled image "busybox"
  Normal   Created           34s        kubelet, 192.168.132.132   Created container busybox
  Normal   Started           33s        kubelet, 192.168.132.132   Started container busybox
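The events show that the scheduler keeps retrying Pending pods, so the pod was assigned as soon as the label appeared, without being re-created. To watch the events of just this pod, a field selector can be used (the pod name below is the one from this session):

kubectl get events --field-selector involvedObject.name=busybox-674bd96f74-m4fst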
[root@docker-server1 deployment]# kubectl label node 192.168.132.132 aaa-
After the label is removed, pods that are already running keep running: nodeSelector is only evaluated at scheduling time, not enforced afterwards.
[root@docker-server1 deployment]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE     IP            NODE              NOMINATED NODE   READINESS GATES
busybox-674bd96f74-m4fst   1/1     Running   0          12m     10.244.1.29   192.168.132.132   <none>           <none>
goproxy                    1/1     Running   1          3d11h   10.244.1.21   192.168.132.132   <none>           <none>
[root@docker-server1 deployment]# kubectl delete pods busybox-674bd96f74-m4fst
pod "busybox-674bd96f74-m4fst" deleted
[root@docker-server1 deployment]# kubectl get pods -o wide
NAME                       READY   STATUS    RESTARTS   AGE   IP       NODE     NOMINATED NODE   READINESS GATES
busybox-674bd96f74-8d7ml   0/1     Pending   0          65s   <none>   <none>   <none>           <none>
The replacement pod cannot start: once the old pod is deleted, its successor must be scheduled afresh, and no node matches the aaa: bbb selector any more.
Now add a custom label to 192.168.132.132:
[root@docker-server1 deployment]# kubectl label node 192.168.132.132 ingress=enable
node/192.168.132.132 labeled
[root@docker-server1 deployment]# vim /yamls/ingress/nginx-controller.yaml
nodeSelector:
ingress: enable
[root@docker-server1 deployment]# kubectl apply -f /yamls/ingress/nginx-controller.yaml
namespace/ingress-nginx unchanged
configmap/nginx-configuration unchanged
configmap/tcp-services unchanged
configmap/udp-services unchanged
serviceaccount/nginx-ingress-serviceaccount unchanged
clusterrole.rbac.authorization.k8s.io/nginx-ingress-clusterrole unchanged
role.rbac.authorization.k8s.io/nginx-ingress-role unchanged
rolebinding.rbac.authorization.k8s.io/nginx-ingress-role-nisa-binding unchanged
clusterrolebinding.rbac.authorization.k8s.io/nginx-ingress-clusterrole-nisa-binding unchanged
deployment.apps/nginx-ingress-controller configured
limitrange/ingress-nginx configured
[root@docker-server1 deployment]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS        RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
nginx-ingress-controller-5cffd956bf-dm9qf   1/1     Terminating   0          32m   192.168.132.133   192.168.132.133   <none>           <none>
nginx-ingress-controller-79669b846b-nlrxl   1/1     Running       0          16s   192.168.132.132   192.168.132.132   <none>           <none>
[root@docker-server1 deployment]# kubectl get pods -n ingress-nginx -o wide
NAME                                        READY   STATUS    RESTARTS   AGE   IP                NODE              NOMINATED NODE   READINESS GATES
nginx-ingress-controller-79669b846b-nlrxl   1/1     Running   0          95s   192.168.132.132   192.168.132.132   <none>           <none>
The other approach is to set nodeName directly, pinning the pod to a specific node (in this cluster the node names happen to be their IP addresses):
[root@docker-server1 deployment]# vim /yamls/ingress/nginx-controller.yaml
hostNetwork: true
nodeName: "192.168.132.133"
[root@docker-server1 deployment]# kubectl get nodes
NAME              STATUS   ROLES    AGE     VERSION
192.168.132.131   Ready    master   3d19h   v1.17.0
192.168.132.132   Ready    <none>   3d19h   v1.17.0
192.168.132.133   Ready    <none>   3d19h   v1.17.0
Avoid specifying nodeName where possible: if that node goes down, the manifest has to be edited with a new nodeName before the pod can run anywhere else, whereas with a label selector you only need to apply the label to another node.
A practical pattern: set the Deployment replicas to 2 and put the same label on two nodes, then select that label, so the ingress controller always runs on a fixed pair of nodes (see the sketch below).
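A minimal sketch of that pattern, reusing the ingress=enable label from above (the Deployment fragment is illustrative; the real nginx-ingress-controller manifest has many more fields):

kubectl label node 192.168.132.132 ingress=enable
kubectl label node 192.168.132.133 ingress=enable

# in the Deployment:
spec:
  replicas: 2                 # one replica per labeled node
  template:
    spec:
      nodeSelector:
        ingress: enable       # restrict scheduling to the two labeled nodes

If one of the labeled nodes fails, the replica can be brought up elsewhere by simply labeling another node, with no manifest change.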
Author's note: the content of this article comes mainly from teacher Yan Wei of Yutian Education; I carried out and verified all the operations myself. Readers who wish to repost should first obtain permission from Yutian Education (http://www.yutianedu.com/) or from teacher Yan himself (https://www.cnblogs.com/breezey/). Thanks!
Original post: https://www.cnblogs.com/zyxnhr/p/12189639.html