Reference documentation:
https://yq.aliyun.com/articles/679721
https://www.cnblogs.com/keithtt/p/6410249.html
https://github.com/kiwigrid/helm-charts/tree/master/charts/fluentd-elasticsearch
https://github.com/kubernetes/kubernetes/tree/5d9d5bca796774a2c12d4e4443e684b619cda7ee/cluster/addons/fluentd-elasticsearch
Kubernetes logs come in several kinds; for Kubernetes itself there are three:
1. Runtime events of resources. For example, after creating a pod in a k8s cluster, you can view the pod's details with kubectl describe pod.
2. Logs produced by the applications running inside containers, such as tomcat, nginx, or php logs, e.g. kubectl logs redis-master-bobr0. This is the part covered by the official docs and most articles online.
3. Service logs of the k8s components themselves, e.g. systemctl status kubelet.

Container logs are usually collected in one of the following ways:
1. Collect outside the container: mount a host directory as the container's log directory and collect on the host.
2. Collect inside the container: run a background log-collection service inside the container.
3. Run a dedicated logging container: a separate container provides a shared log volume and collects the logs from it.
4. Collect over the network: the application inside the container sends logs directly to a log hub; for example, a Java program can use Log4j 2 to format logs and ship them to a remote endpoint.
5. Change Docker's --log-driver: different drivers send logs to different destinations; set log-driver to a log collection service such as syslog, fluentd, or splunk, and the logs are shipped to a remote endpoint.
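As a sketch of option 5, the Docker log driver can be switched cluster-wide in daemon.json. The fluentd address below is an assumption (the driver's default forward port); point it at your actual collector. The snippet writes to /tmp so it is safe to run; the real file is /etc/docker/daemon.json.

```shell
# Sketch of option 5: route container stdout/stderr through Docker's
# fluentd log driver. localhost:24224 is an assumed fluentd forward port.
# Written to /tmp here; copy to /etc/docker/daemon.json and restart docker
# (systemctl restart docker) to apply for real.
cat > /tmp/daemon.json <<'EOF'
{
  "log-driver": "fluentd",
  "log-opts": {
    "fluentd-address": "localhost:24224",
    "fluentd-async-connect": "true"
  }
}
EOF
echo "wrote /tmp/daemon.json"
```

Note that changing the daemon-level log driver affects every container on the host; per-container overrides are possible with `docker run --log-driver`.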
Fluentd is deployed as a DaemonSet that spawns a pod on each node; each pod reads the logs generated by the kubelet, the container runtime, and the containers, and sends them to Elasticsearch.
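The shape of that DaemonSet can be sketched as below. This is a simplified, hypothetical manifest, not the chart's actual template (the real one adds a ServiceAccount, config volumes, and probes); it only shows the host log directories that fluentd tails, which correspond to the hostLogDir values in the chart's values.yaml.

```yaml
# Simplified sketch of the rendered DaemonSet (not the chart's real template).
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd-elasticsearch
spec:
  selector:
    matchLabels: {app: fluentd-elasticsearch}
  template:
    metadata:
      labels: {app: fluentd-elasticsearch}
    spec:
      containers:
      - name: fluentd
        image: registry.cn-beijing.aliyuncs.com/minminmsn/fluentd-elasticsearch:v2.5.2
        volumeMounts:
        - {name: varlog, mountPath: /var/log}
        - {name: containers, mountPath: /var/lib/docker/containers, readOnly: true}
      volumes:
      - {name: varlog, hostPath: {path: /var/log}}
      - {name: containers, hostPath: {path: /var/lib/docker/containers}}
```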
1. Download
[root@elasticsearch01 yaml]# git clone https://github.com/kiwigrid/helm-charts
Cloning into 'helm-charts'...
remote: Enumerating objects: 33, done.
remote: Counting objects: 100% (33/33), done.
remote: Compressing objects: 100% (23/23), done.
remote: Total 1062 (delta 13), reused 25 (delta 10), pack-reused 1029
Receiving objects: 100% (1062/1062), 248.83 KiB | 139.00 KiB/s, done.
Resolving deltas: 100% (667/667), done.
[root@elasticsearch01 yaml]# cd helm-charts/fluentd-elasticsearch
[root@elasticsearch01 fluentd-elasticsearch]# ls
Chart.yaml OWNERS README.md templates values.yaml
2. Edit the values.yaml configuration
The main changes are the fluentd image address, the Elasticsearch address, and the index prefix:
[root@elasticsearch01 fluentd-elasticsearch]# cat values.yaml |grep -Ev "^#|^$"
image:
  repository: registry.cn-beijing.aliyuncs.com/minminmsn/fluentd-elasticsearch
  tag: v2.5.2
  pullPolicy: IfNotPresent
  ## Optionally specify an array of imagePullSecrets.
  ## Secrets must be manually created in the namespace.
  ## ref: https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
  ##
  # pullSecrets:
  #   - myRegistrKeySecretName
awsSigningSidecar:
  enabled: false
  image:
    repository: abutaha/aws-es-proxy
    tag: 0.9
priorityClassName: ""
hostLogDir:
  varLog: /var/log
  dockerContainers: /var/lib/docker/containers
  libSystemdDir: /usr/lib64
resources: {}
  # limits:
  #   cpu: 100m
  #   memory: 500Mi
  # requests:
  #   cpu: 100m
  #   memory: 200Mi
elasticsearch:
  auth:
    enabled: false
    user: "yourUser"
    password: "yourPass"
  buffer_chunk_limit: 2M
  buffer_queue_limit: 8
  host: '10.2.8.44'
  logstash_prefix: 'logstash'
  port: 9200
  scheme: 'http'
  ssl_version: TLSv1_2
fluentdArgs: "--no-supervisor -q"
env:
  # OUTPUT_USER: my_user
  # LIVENESS_THRESHOLD_SECONDS: 300
  # STUCK_THRESHOLD_SECONDS: 900
secret:
rbac:
  create: true
serviceAccount:
  # Specifies whether a ServiceAccount should be created
  create: true
  # The name of the ServiceAccount to use.
  # If not set and create is true, a name is generated using the fullname template
  name: ""
podSecurityPolicy:
  enabled: false
  annotations: {}
    ## Specify pod annotations
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#apparmor
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#seccomp
    ## Ref: https://kubernetes.io/docs/concepts/policy/pod-security-policy/#sysctl
    ##
    # seccomp.security.alpha.kubernetes.io/allowedProfileNames: '*'
    # seccomp.security.alpha.kubernetes.io/defaultProfileName: 'docker/default'
    # apparmor.security.beta.kubernetes.io/defaultProfileName: 'runtime/default'
livenessProbe:
  enabled: true
annotations: {}
podAnnotations: {}
  # prometheus.io/scrape: "true"
  # prometheus.io/port: "24231"
updateStrategy:
  type: RollingUpdate
tolerations: {}
  # - key: node-role.kubernetes.io/master
  #   operator: Exists
  #   effect: NoSchedule
affinity: {}
  # nodeAffinity:
  #   requiredDuringSchedulingIgnoredDuringExecution:
  #     nodeSelectorTerms:
  #     - matchExpressions:
  #       - key: node-role.kubernetes.io/master
  #         operator: DoesNotExist
nodeSelector: {}
service: {}
  # type: ClusterIP
  # ports:
  #   - name: "monitor-agent"
  #     port: 24231
serviceMonitor:
  ## If true, a ServiceMonitor CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: false
  interval: 10s
  path: /metrics
  labels: {}
prometheusRule:
  ## If true, a PrometheusRule CRD is created for a prometheus operator
  ## https://github.com/coreos/prometheus-operator
  ##
  enabled: false
  prometheusNamespace: monitoring
  labels: {}
  # role: alert-rules
configMaps:
  useDefaults:
    systemConf: true
    containersInputConf: true
    systemInputConf: true
    forwardInputConf: true
    monitoringConf: true
    outputConf: true
extraConfigMaps:
  # system.conf: |-
  #   <system>
  #     root_dir /tmp/fluentd-buffers/
  #   </system>
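Instead of editing the chart's values.yaml in place, the same three changes can be kept in a small override file and passed at install time. This is a sketch: the keys mirror the values.yaml above, and the /tmp path is arbitrary.

```shell
# Minimal override file carrying only the values this post changes;
# everything else falls back to the chart defaults.
cat > /tmp/fluentd-overrides.yaml <<'EOF'
image:
  repository: registry.cn-beijing.aliyuncs.com/minminmsn/fluentd-elasticsearch
  tag: v2.5.2
elasticsearch:
  host: '10.2.8.44'
  port: 9200
  logstash_prefix: 'logstash'
EOF
# From inside the chart directory:
#   helm install . -f /tmp/fluentd-overrides.yaml
echo "override file ready"
```

Keeping overrides in their own file makes later `helm upgrade` runs reproducible, since the chart itself stays unmodified.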
3. Install fluentd with Helm
[root@elasticsearch01 fluentd-elasticsearch]# helm install .
NAME: sanguine-dragonfly
LAST DEPLOYED: Thu Jun 6 16:07:55 2019
NAMESPACE: default
STATUS: DEPLOYED
RESOURCES:
==> v1/ServiceAccount
NAME SECRETS AGE
sanguine-dragonfly-fluentd-elasticsearch 0 0s
==> v1/ClusterRole
NAME AGE
sanguine-dragonfly-fluentd-elasticsearch 0s
==> v1/ClusterRoleBinding
NAME AGE
sanguine-dragonfly-fluentd-elasticsearch 0s
==> v1/DaemonSet
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
sanguine-dragonfly-fluentd-elasticsearch 0 0 0 0 0 <none> 0s
==> v1/ConfigMap
NAME DATA AGE
sanguine-dragonfly-fluentd-elasticsearch 6 0s
NOTES:
1. To verify that Fluentd has started, run:
kubectl --namespace=default get pods -l "app.kubernetes.io/name=fluentd-elasticsearch,app.kubernetes.io/instance=sanguine-dragonfly"
THIS APPLICATION CAPTURES ALL CONSOLE OUTPUT AND FORWARDS IT TO elasticsearch . Anything that might be identifying,
including things like IP addresses, container images, and object names will NOT be anonymized.
4. Verify the installation
[root@elasticsearch01 fluentd-elasticsearch]# kubectl get pods |grep flu
sanguine-dragonfly-fluentd-elasticsearch-hrxbp 1/1 Running 0 26m
sanguine-dragonfly-fluentd-elasticsearch-jcznt 1/1 Running 0 26m
1. Elasticsearch
Elasticsearch now contains indices named like logstash-2019.06.06, one per day by default; the logstash prefix is the one set in values.yaml.
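Since the daily index name is just the logstash prefix plus the date, today's expected index can be computed and then checked against the cluster. The curl line (commented out, since it needs the live cluster) assumes the 10.2.8.44:9200 address from values.yaml.

```shell
# Index names follow <logstash_prefix>-%Y.%m.%d, one index per day.
INDEX="logstash-$(date +%Y.%m.%d)"
echo "$INDEX"
# To confirm the index exists on the cluster:
#   curl -s "http://10.2.8.44:9200/_cat/indices/${INDEX}?v"
```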
2. Kibana
In Kibana: Management -> Create Index Pattern -> logstash-2019* -> Discover.
Original article: https://blog.51cto.com/jerrymin/2406112