k8s Monitoring


Note: make sure the clocks on all nodes are synchronized.

#(1) How it works

The node-exporter component collects metrics from each node and exposes them for Prometheus to scrape; Prometheus stores the data, and Grafana presents it to users as graphs in a web UI.

Note: metrics-server needs to be deployed in advance.
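
As a quick sanity check before continuing (a minimal sketch, not part of the original write-up; output depends on your cluster version), confirm that metrics-server is registered and responding:

kubectl get apiservices | grep metrics
kubectl top nodes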

#(2) Deploy the node-exporter component

1) The node-exporter.yaml file
# cat node-exporter.yaml
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: node-exporter
  namespace: kube-system
  labels:
    k8s-app: node-exporter
spec:
  template:
    metadata:
      labels:
        k8s-app: node-exporter
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/wangfang-k8s/node-exporter:latest
        name: node-exporter
        ports:
        - containerPort: 9100
          protocol: TCP
          name: http
---
apiVersion: v1
kind: Service
metadata:
  labels:
    k8s-app: node-exporter
  name: node-exporter
  namespace: kube-system
spec:
  ports:
  - name: http
    port: 9100
    nodePort: 31672
    protocol: TCP
  type: NodePort
  selector:
    k8s-app: node-exporter

2) Create the Pods and Service

kubectl apply -f node-exporter.yaml

3) Verify

Check that the node-exporter DaemonSet Pods are running and that the Service is exposed (screenshots omitted).

The node-exporter metrics endpoint can now be reached through the NodePort (31672) exposed by the Service.
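
A minimal command-line check (replace <node-ip> with the address of any node in your cluster):

kubectl get daemonset,svc -n kube-system -l k8s-app=node-exporter
curl -s http://<node-ip>:31672/metrics | head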

#(3) Deploy the Prometheus component

1) The Prometheus configuration file

# cat configmap.yaml 
apiVersion: v1
kind: ConfigMap
metadata:
  name: prometheus-config
  namespace: kube-system
data:
  prometheus.yml: |
    global:
      scrape_interval:     15s
      evaluation_interval: 15s
    scrape_configs:

    - job_name: 'kubernetes-apiservers'
      kubernetes_sd_configs:
      - role: endpoints
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - source_labels: [__meta_kubernetes_namespace, __meta_kubernetes_service_name, __meta_kubernetes_endpoint_port_name]
        action: keep
        regex: default;kubernetes;https

    - job_name: 'kubernetes-nodes'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics

    - job_name: 'kubernetes-cadvisor'
      kubernetes_sd_configs:
      - role: node
      scheme: https
      tls_config:
        ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
      relabel_configs:
      - action: labelmap
        regex: __meta_kubernetes_node_label_(.+)
      - target_label: __address__
        replacement: kubernetes.default.svc:443
      - source_labels: [__meta_kubernetes_node_name]
        regex: (.+)
        target_label: __metrics_path__
        replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor

    - job_name: 'kubernetes-service-endpoints'
      kubernetes_sd_configs:
      - role: endpoints
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_scheme]
        action: replace
        target_label: __scheme__
        regex: (https?)
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_service_annotation_prometheus_io_port]
        action: replace
        target_label: __address__
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        action: replace
        target_label: kubernetes_name

    - job_name: 'kubernetes-services'
      kubernetes_sd_configs:
      - role: service
      metrics_path: /probe
      params:
        module: [http_2xx]
      relabel_configs:
      - source_labels: [__meta_kubernetes_service_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__address__]
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_service_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_service_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-ingresses'
      kubernetes_sd_configs:
      - role: ingress
      relabel_configs:
      - source_labels: [__meta_kubernetes_ingress_annotation_prometheus_io_probe]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_ingress_scheme,__address__,__meta_kubernetes_ingress_path]
        regex: (.+);(.+);(.+)
        replacement: ${1}://${2}${3}
        target_label: __param_target
      - target_label: __address__
        replacement: blackbox-exporter.example.com:9115
      - source_labels: [__param_target]
        target_label: instance
      - action: labelmap
        regex: __meta_kubernetes_ingress_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_ingress_name]
        target_label: kubernetes_name

    - job_name: 'kubernetes-pods'
      kubernetes_sd_configs:
      - role: pod
      relabel_configs:
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
        action: keep
        regex: true
      - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_path]
        action: replace
        target_label: __metrics_path__
        regex: (.+)
      - source_labels: [__address__, __meta_kubernetes_pod_annotation_prometheus_io_port]
        action: replace
        regex: ([^:]+)(?::\d+)?;(\d+)
        replacement: $1:$2
        target_label: __address__
      - action: labelmap
        regex: __meta_kubernetes_pod_label_(.+)
      - source_labels: [__meta_kubernetes_namespace]
        action: replace
        target_label: kubernetes_namespace
      - source_labels: [__meta_kubernetes_pod_name]
        action: replace
        target_label: kubernetes_pod_name
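
The kubernetes-service-endpoints and kubernetes-pods jobs above only keep targets whose Service or Pod carries the prometheus.io/scrape annotation. As an illustration (a hypothetical Service named my-app, not part of this deployment), a scrape target would be annotated roughly like this:

apiVersion: v1
kind: Service
metadata:
  name: my-app                     # hypothetical example
  annotations:
    prometheus.io/scrape: "true"   # picked up by the keep rule above
    prometheus.io/port: "8080"     # rewrites __address__ to <pod-ip>:8080
    prometheus.io/path: "/metrics" # rewrites __metrics_path__
spec:
  ports:
  - port: 8080
  selector:
    app: my-app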

2) Create the RBAC permissions

# cat rbac.yaml 
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: prometheus
rules:
- apiGroups: [""]
  resources:
  - nodes
  - nodes/proxy
  - services
  - endpoints
  - pods
  verbs: ["get", "list", "watch"]
- apiGroups:
  - extensions
  resources:
  - ingresses
  verbs: ["get", "list", "watch"]
- nonResourceURLs: ["/metrics"]
  verbs: ["get"]
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: prometheus
  namespace: kube-system
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: prometheus
subjects:
- kind: ServiceAccount
  name: prometheus
  namespace: kube-system
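
After applying the RBAC objects, you can spot-check that the service account has the expected permissions (an optional sanity check, not part of the original write-up):

kubectl auth can-i list nodes --as=system:serviceaccount:kube-system:prometheus
kubectl auth can-i get pods --as=system:serviceaccount:kube-system:prometheus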

3) Create the Deployment file

# cat prometheus-deployment.yaml
---
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  labels:
    name: prometheus-deployment
  name: prometheus
  namespace: kube-system
spec:
  replicas: 1
  selector:
    matchLabels:
      app: prometheus
  template:
    metadata:
      labels:
        app: prometheus
    spec:
      containers:
      - image: registry.cn-hangzhou.aliyuncs.com/wangfang-k8s/prometheus:v2.0.0
        name: prometheus
        command:
        - "/bin/prometheus"
        args:
        - "--config.file=/etc/prometheus/prometheus.yml"
        - "--storage.tsdb.path=/prometheus"
        - "--storage.tsdb.retention=24h"
        ports:
        - containerPort: 9090
          protocol: TCP
        volumeMounts:
        - mountPath: "/prometheus"
          name: data
        - mountPath: "/etc/prometheus"
          name: config-volume
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
          limits:
            cpu: 500m
            memory: 2500Mi
      serviceAccountName: prometheus
      volumes:
      - name: data
        emptyDir: {}
      - name: config-volume
        configMap:
          name: prometheus-config

4) Create the Service file

# cat prometheus.svc.yml 
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: prometheus
  name: prometheus
  namespace: kube-system
spec:
  type: NodePort
  ports:
  - port: 9090
    targetPort: 9090
    nodePort: 30003
  selector:
    app: prometheus

5) Create the Pod, Service, RBAC objects and ConfigMap

kubectl apply -f .

6) Verify

Check that the Prometheus Pod is running and that its Service is exposed (screenshots omitted).

The Prometheus web UI can be reached through the exposed NodePort. On the Status > Targets page, confirm that Prometheus has successfully connected to the apiserver, and that metric data can be queried through its API (screenshots omitted).
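
A minimal command-line check of the same things (replace <node-ip> with the address of any node):

kubectl get pods,svc -n kube-system -l app=prometheus
curl -s http://<node-ip>:30003/-/healthy
curl -s 'http://<node-ip>:30003/api/v1/query?query=up' | head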

#(4) Deploy the Grafana component

1) Prepare the Grafana resource manifest

# cat grafana.yaml 
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: monitoring-grafana
  namespace: kube-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        task: monitoring
        k8s-app: grafana
    spec:
      containers:
      - name: grafana
        #image: gcr.io/google_containers/heapster-grafana-amd64:v4.4.3
        image: registry.cn-hangzhou.aliyuncs.com/wangfang-k8s/heapster-grafana-amd64:v4.4.3
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 3000
          protocol: TCP
        volumeMounts:
        - mountPath: /etc/ssl/certs
          name: ca-certificates
          readOnly: true
        - mountPath: /var
          name: grafana-storage
        env:
        - name: INFLUXDB_HOST
          value: monitoring-influxdb
        - name: GF_SERVER_HTTP_PORT
          value: "3000"
        # The following env variables are required to make Grafana accessible via
        # the kubernetes api-server proxy. On production clusters, we recommend
        # removing these env variables, setup auth for grafana, and expose the grafana
        # service using a LoadBalancer or a public IP.
        - name: GF_AUTH_BASIC_ENABLED
          value: "false"
        - name: GF_AUTH_ANONYMOUS_ENABLED
          value: "true"
        - name: GF_AUTH_ANONYMOUS_ORG_ROLE
          value: Admin
        - name: GF_SERVER_ROOT_URL
          # If you're only using the API Server proxy, set this value instead:
          # value: /api/v1/namespaces/kube-system/services/monitoring-grafana/proxy
          value: /
      volumes:
      - name: ca-certificates
        hostPath:
          path: /etc/ssl/certs
      - name: grafana-storage
        emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  labels:
    # For use as a Cluster add-on (https://github.com/kubernetes/kubernetes/tree/master/cluster/addons)
    # If you are NOT using this as an addon, you should comment out this line.
    kubernetes.io/cluster-service: 'true'
    kubernetes.io/name: monitoring-grafana
  name: monitoring-grafana
  namespace: kube-system
spec:
  # In a production setup, we recommend accessing Grafana through an external Loadbalancer
  # or through a public IP.
  # type: LoadBalancer
  # You could also use NodePort to expose the service at a randomly-generated port
  type: NodePort
  ports:
  - port: 80
    targetPort: 3000
    nodePort: 31111
  selector:
    k8s-app: grafana

2) Create the Pod and Service

kubectl apply -f grafana.yaml 

3) Verify

Check that the Grafana Pod is running and that the monitoring-grafana Service is exposed on NodePort 31111 (screenshots omitted).
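
From the command line this amounts to roughly (replace <node-ip> with the address of any node):

kubectl get pods -n kube-system -l k8s-app=grafana
kubectl get svc -n kube-system monitoring-grafana
curl -sI http://<node-ip>:31111/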

4) Open Grafana and configure the data source

Add a Prometheus data source in Grafana (screenshot omitted).
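
When adding the data source, the in-cluster address of the Prometheus Service created above is http://prometheus.kube-system.svc.cluster.local:9090 (assuming default cluster DNS). One optional way to confirm that address is reachable from inside the cluster (not from the original article; pod and image names are illustrative):

kubectl run -it --rm curl-test --image=busybox --restart=Never -n kube-system -- \
  wget -qO- http://prometheus.kube-system.svc.cluster.local:9090/-/healthy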

5) Download the dashboard template

Template download URL: https://gitee.com/love-docker/k8s/raw/master/v1.11/monitor/grafana/kubernetes-cluster-monitoring-via-prometheus_rev3.json

6) Import the dashboard template

In the Grafana import dialog, select the Prometheus data source configured above (screenshots omitted).

7) View the monitoring data in the imported dashboard (screenshot omitted).

Original article: https://blog.51cto.com/1000682/2359778

(0)
(0)
   
举报
评论 一句话评论(0
登录后才能评论!
© 2014 mamicode.com 版权所有  联系我们:gaon5@hotmail.com
迷上了代码!