Tags: EFK
Log collection in a k8s cluster: the EFK stack
References
http://tonybai.com/2017/03/03/implement-kubernetes-cluster-level-logging-with-fluentd-and-elasticsearch-stack/
https://github.com/kubernetes/kubernetes/tree/master/cluster/addons/fluentd-elasticsearch
https://t.goodrain.com/t/k8s/242
http://logz.io/blog/kubernetes-log-analysis/
http://blog.csdn.net/gsying1474/article/details/52426366
http://www.cnblogs.com/zhangjiayong/p/6203025.html
http://stackoverflow.com/questions/41686681/fluentd-es-v1-22-daemonset-doesnt-create-any-pod
The documents below give a simple but systematic walkthrough of the k8s 1.5.x series: cluster deployment, Pod creation, DNS resolution, dashboard, monitoring, reverse proxy, storage, and log collection. Mutual TLS authentication with self-built certificates is not very practical, so it is not covered. The whole series installs the pre-built binaries directly (no package manager needed) and applies to 1.5.2, 1.5.3, 1.5.4 and later releases; just remember to keep the sample URLs on GitHub up to date.
k8s cluster installation and deployment
http://jerrymin.blog.51cto.com/3002256/1898243
k8s cluster RC, Service, and Pod deployment
http://jerrymin.blog.51cto.com/3002256/1900260
k8s cluster add-ons: kubernetes-dashboard and kube-dns
http://jerrymin.blog.51cto.com/3002256/1900508
k8s cluster monitoring: heapster
http://jerrymin.blog.51cto.com/3002256/1904460
k8s cluster reverse-proxy and load-balancing components
http://jerrymin.blog.51cto.com/3002256/1904463
k8s cluster volume mounting: NFS
http://jerrymin.blog.51cto.com/3002256/1906778
k8s cluster volume mounting: GlusterFS
http://jerrymin.blog.51cto.com/3002256/1907274
k8s cluster log collection: the EFK stack
http://jerrymin.blog.51cto.com/3002256/1907282
Implementation
This article follows the approach officially recommended by Kubernetes: when the cluster starts, a Fluentd agent runs on every node, collects the logs, and forwards them to Elasticsearch.
Concretely, each agent mounts the host directory /var/lib/docker/containers and uses Fluentd's tail input plugin to follow every container's log file, sending the records straight to Elasticsearch.
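For reference, here is a minimal sketch of the kind of td-agent/Fluentd configuration this image runs: a tail source reading the per-container JSON log files plus an elasticsearch output writing logstash-* indices. Paths, tags, and settings are illustrative assumptions, not the exact configuration baked into gcr.io/google_containers/fluentd-elasticsearch:1.22.

<source>
  type tail                                # follow each container's JSON log file
  path /var/log/containers/*.log           # symlinks that resolve into the mounted /var/lib/docker/containers/...
  pos_file /var/log/es-containers.log.pos  # remember the read position across agent restarts
  tag kubernetes.*
  format json
  time_format %Y-%m-%dT%H:%M:%S.%NZ
  read_from_head true
</source>

<match **>
  type elasticsearch                       # fluent-plugin-elasticsearch output
  host elasticsearch-logging               # the Service created by es-service.yaml
  port 9200
  logstash_format true                     # write daily logstash-YYYY.MM.DD indices, the default Kibana pattern
  flush_interval 5s
</match>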
Pull the images in advance:
[root@k8s-node1 ~]# docker pull gcr.io/google_containers/elasticsearch:v2.4.1
[root@k8s-node1 ~]# docker pull gcr.io/google_containers/kibana:v4.6.1
[root@k8s-node1 ~]# docker pull gcr.io/google_containers/fluentd-elasticsearch:1.22
[root@k8s-node1 ~]# docker images |grep el
registry.access.redhat.com/rhel7/pod-infrastructure latest 34d3450d733b 5 weeks ago 205 MB
gcr.io/google_containers/fluentd-elasticsearch 1.22 7896bdf952bf 8 weeks ago 266.2 MB
gcr.io/google_containers/elasticsearch
[root@k8s-master fluentd-elasticsearch]# pwd
/usr/local/kubernetes/cluster/addons/fluentd-elasticsearch
[root@k8s-master fluentd-elasticsearch]# ls
es-controller.yaml es-service.yaml fluentd-es-image kibana-image
es-image fluentd-es-ds.yaml kibana-controller.yaml kibana-service.yaml
First create Elasticsearch and Kibana
[root@k8s-master fluentd-elasticsearch]# kubectl create -f es-controller.yaml
[root@k8s-master fluentd-elasticsearch]# kubectl create -f es-service.yaml
[root@k8s-master fluentd-elasticsearch]# kubectl create -f kibana-controller.yaml
[root@k8s-master fluentd-elasticsearch]# kubectl create -f kibana-service.yaml
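Before moving on, it is worth checking that the Elasticsearch and Kibana pods and services are up. The patterns below assume the manifests keep the upstream addon names elasticsearch-logging and kibana-logging:

[root@k8s-master fluentd-elasticsearch]# kubectl get svc -n kube-system | grep logging
[root@k8s-master fluentd-elasticsearch]# kubectl get pods -n kube-system | grep -E 'elasticsearch|kibana'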
Finally, create the Fluentd DaemonSet
[root@k8s-master fluentd-elasticsearch]# kubectl create -f fluentd-es-ds.yaml
error: error validating "fluentd-es-ds.yaml": error validating data: found invalid field tolerations for v1.PodSpec; if you choose to ignore these errors, turn
validation off with --validate=false
The fix is to comment out these three lines in fluentd-es-ds.yaml (tolerations only became a first-class v1.PodSpec field in Kubernetes 1.6, so the 1.5 API server rejects it):
#tolerations:
#- key: "node.alpha.kubernetes.io/ismaster"
#  effect: "NoSchedule"
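If you do want the agent to run on the master as well, Kubernetes 1.5 expresses tolerations through an alpha annotation on the pod template instead of the field. The sketch below is an assumption based on the 1.5-era annotation format and should be verified against your cluster version:

spec:
  template:
    metadata:
      annotations:
        # pre-1.6 annotation form of the toleration commented out above
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"node.alpha.kubernetes.io/ismaster","effect":"NoSchedule"}]'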
After that the DaemonSet was created successfully, but no fluentd pods appeared; the Stack Overflow thread listed in the references pointed to the solution:
I found the solution after studying https://github.com/kubernetes/kubernetes/blob/master/cluster/addons/fluentd-elasticsearch/fluentd-es-ds.yaml
There is a nodeSelector set to alpha.kubernetes.io/fluentd-ds-ready: "true"
But the nodes don't have a label like that. What I did is add the label as below to one node to check whether it's working.
kubectl label nodes {node_name} alpha.kubernetes.io/fluentd-ds-ready="true"
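The relevant part of the pod template in fluentd-es-ds.yaml looks roughly like this (paraphrased from the upstream addon, not copied verbatim); without the matching node label, the DaemonSet schedules zero pods:

spec:
  template:
    spec:
      nodeSelector:
        # the DaemonSet only schedules onto nodes that carry this label
        alpha.kubernetes.io/fluentd-ds-ready: "true"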
[root@k8s-master fluentd-elasticsearch]# kubectl label nodes k8s-node1 alpha.kubernetes.io/fluentd-ds-ready="true"
node "k8s-node1" labeled
[root@k8s-master fluentd-elasticsearch]# kubectl label nodes k8s-node2 alpha.kubernetes.io/fluentd-ds-ready="true"
node "k8s-node2" labeled
[root@k8s-master fluentd-elasticsearch]# kubectl label nodes k8s-node3 alpha.kubernetes.io/fluentd-ds-ready="true"
node "k8s-node3" labeled
[root@k8s-master fluentd-elasticsearch]# kubectl get pods -n kube-system |grep fluentd
fluentd-es-v1.22-95ht2 1/1 Running 0 1m
fluentd-es-v1.22-k905f 1/1 Running 0 1m
fluentd-es-v1.22-w9q88 1/1 Running 0 1m
Finally, open Kibana through the API-server proxy and create the index pattern:
http://172.17.3.20:8080/api/v1/proxy/namespaces/kube-system/services/kibana-logging/app/kibana#
The default index pattern is logstash-*.
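You can also confirm that Fluentd is really writing logstash-* indices by querying Elasticsearch through the same API-server proxy; the service name elasticsearch-logging below is an assumption based on the upstream addon manifests:

# list the Elasticsearch indices via the kube-apiserver proxy; daily logstash-YYYY.MM.DD entries should appear
[root@k8s-master fluentd-elasticsearch]# curl 'http://172.17.3.20:8080/api/v1/proxy/namespaces/kube-system/services/elasticsearch-logging/_cat/indices?v'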
Original post: http://jerrymin.blog.51cto.com/3002256/1907282