1. Lab environment: eight CentOS hosts running filebeat + redis + logstash + an ES cluster (3 nodes) + kibana, used to search log content. Roles: filebeat collects the local httpd logs and ships them to redis; redis acts as a buffer so that logstash is not overwhelmed when the data volume spikes; logstash formats the collected data into a specified structure; the ES cluster analyzes the formatted documents, builds indexes, and serves queries; kibana provides the graphical query front end.
Logical topology diagram
The four Elastic packages used in this lab (filebeat, logstash, elasticsearch, kibana) are all version 5.6.
Downloads: https://www.elastic.co/cn/products
Before configuring: 1. disable the firewall; 2. disable SELinux; 3. synchronize time across all hosts.
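A sketch of these pre-flight steps on CentOS 7 (firewalld and ntpdate assumed; run on all eight hosts):
systemctl stop firewalld && systemctl disable firewalld
setenforce 0          # also set SELINUX=disabled in /etc/selinux/config to persist across reboots
ntpdate pool.ntp.org  # one-shot sync; chronyd or ntpd can keep clocks aligned afterwards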
Step 1: collect the httpd service's log file and send the data to the redis service.
Configuration on the httpd + filebeat server
[root@filebeat ~]# yum install -y httpd
[root@filebeat ~]# echo test > /var/www/html/index.html
[root@filebeat ~]# systemctl start httpd
[root@filebeat ~]# rpm -ivh filebeat-5.6.10-x86_64.rpm
Relevant configuration files:
/etc/filebeat/filebeat.full.yml # full template/reference configuration
/etc/filebeat/filebeat.yml # main configuration file
To configure the redis output, copy the output.redis section from the template file into the main configuration file:
output.redis:
  enabled: true                  # enable this output
  hosts: ["172.18.100.2:6379"]   # redis server
  port: 6379
  key: filebeat                  # name of the redis key (a list)
  password: centos               # omit if redis has no password set
  db: 0                          # which redis database to write to
  datatype: list                 # data type used for the key
  worker: 1                      # number of worker processes writing data
  loadbalance: true              # load-balance writes across multiple redis hosts
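The original skips the input side of filebeat.yml, but filebeat also needs a prospector pointing at the Apache access log (the same path that later appears as the source field in the logstash output). A minimal sketch in filebeat 5.x syntax:
filebeat.prospectors:
- input_type: log                 # a log-file prospector
  paths:
    - /var/log/httpd/access_log   # the httpd access log to ship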
[root@filebeat ~]# systemctl start filebeat
Redis configuration
[root@redis ~]# yum install -y redis
[root@redis ~]# vim /etc/redis.conf
bind 0.0.0.0
port 6379
requirepass centos
[root@redis ~]# systemctl start redis
Generate some access-log traffic, then check in redis.
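For example, a local curl against the test page (the clientip of 127.0.0.1 and the curl/7.29.0 agent in the logstash output below suggest exactly this):
[root@filebeat ~]# curl http://127.0.0.1/
test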
[root@redis ~]# redis-cli -a centos
127.0.0.1:6379> KEYS *
1) "filebeat" # the key exists, so collection is working
Step 2: configure logstash to pull data from redis, format it, store it in elasticsearch, and display it.
Logstash configuration; a JVM must be installed before setting up this service.
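The original does not show the JVM install itself; on CentOS 7, one option (an assumption, not from the original) is OpenJDK 8 from the base repos:
[root@nginx2 ~]# yum install -y java-1.8.0-openjdk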
[root@nginx2 ~]# rpm -ivh logstash-5.6.10.rpm
[root@nginx2 ~]# cd /etc/logstash/conf.d/
[root@nginx2 conf.d]# vim redis-logstash-els.conf # create the file; any name ending in .conf will do
input {
  redis {
    batch_count => 1
    data_type   => "list"
    key         => "filebeat"
    host        => "172.18.100.2"
    port        => 6379
    password    => "centos"
    threads     => 5
  }
}
filter {
  grok {
    match => {
      "message" => "%{HTTPD_COMBINEDLOG}"
    }
    remove_field => "message"
  }
  date {
    # Apache timestamps look like 20/Jun/2018:11:21:19 -0400: the month is an
    # abbreviated name, so the pattern must be dd/MMM/yyyy:HH:mm:ss Z; the
    # original dd/MM/YYYY:H:m:s Z cannot parse it and yields _dateparsefailure.
    match        => ["timestamp","dd/MMM/yyyy:HH:mm:ss Z"]
    remove_field => "timestamp"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
Display the formatted events in the terminal. (This capture shows the _dateparsefailure tag that the mismatched dd/MM/YYYY:H:m:s Z pattern produces; with the corrected dd/MMM/yyyy:HH:mm:ss Z pattern the tag disappears and the timestamp field is removed, since the date filter's remove_field only applies on a successful parse.)
[root@nginx2 conf.d]# /usr/share/logstash/bin/logstash -f redis-logstash-els.conf
{
    "request" => "/",
    "agent" => "\"curl/7.29.0\"",
    "offset" => 93516,
    "auth" => "-",
    "ident" => "-",
    "input_type" => "log",
    "verb" => "GET",
    "source" => "/var/log/httpd/access_log",
    "type" => "log",
    "tags" => [
        [0] "_dateparsefailure"
    ],
    "referrer" => "\"-\"",
    "@timestamp" => 2018-06-20T15:21:20.094Z,
    "response" => "200",
    "bytes" => "5",
    "clientip" => "127.0.0.1",
    "beat" => {
        "name" => "filebeat.test.com",
        "hostname" => "filebeat.test.com",
        "version" => "5.6.10"
    },
    "@version" => "1",
    "httpversion" => "1.1",
    "timestamp" => "20/Jun/2018:11:21:19 -0400"
}
Change the output to send events to the ES cluster instead:
output {
  elasticsearch {
    hosts         => ["http://172.18.100.4:9200/","http://172.18.100.5:9200/","http://172.18.100.6:9200/"]
    index         => "logstash-%{+YYYY.MM.dd}"
    document_type => "apache_logs"
  }
}
Test the configuration; it should report no errors:
[root@nginx2 conf.d]# /usr/share/logstash/bin/logstash -f redis-logstash-els.conf -t
WARNING: Could not find logstash.yml which is typically located in $LS_HOME/config or /etc/logstash. You can specify the path using --path.settings. Continuing using the defaults
Could not find log4j2 configuration at path /usr/share/logstash/config/log4j2.properties. Using default config which logs errors to the console
Configuration OK
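Once the syntax test passes, logstash can keep running in the foreground as above, or be started as a service (a sketch, using the unit the rpm installs):
[root@nginx2 conf.d]# systemctl start logstash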
Step 3: configure the ES cluster service; a JVM must be installed first on each node.
Node 1:
[root@els1 ~]# rpm -ivh elasticsearch-5.6.10.rpm
[root@els1 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: myels
node.name: els1.test.com
network.host: 172.18.100.4
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.18.100.4", "172.18.100.5", "172.18.100.6"]
discovery.zen.minimum_master_nodes: 2 # quorum for 3 master-eligible nodes: (3/2)+1 = 2, prevents split-brain
Node 2:
[root@els2 ~]# rpm -ivh elasticsearch-5.6.10.rpm
[root@els2 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: myels
node.name: els2.test.com
network.host: 172.18.100.5
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.18.100.4", "172.18.100.5", "172.18.100.6"]
discovery.zen.minimum_master_nodes: 2
Node 3:
[root@els3 ~]# rpm -ivh elasticsearch-5.6.10.rpm
[root@els3 ~]# vim /etc/elasticsearch/elasticsearch.yml
cluster.name: myels
node.name: els3.test.com
network.host: 172.18.100.6
http.port: 9200
discovery.zen.ping.unicast.hosts: ["172.18.100.4", "172.18.100.5", "172.18.100.6"]
discovery.zen.minimum_master_nodes: 2
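The original omits starting the nodes. Presumably each node needs the same OpenJDK install as in step 2, then (a sketch):
[root@els1 ~]# systemctl daemon-reload
[root@els1 ~]# systemctl start elasticsearch
Once all three nodes are up, cluster state can be checked from any of them; "number_of_nodes" should report 3:
[root@els1 ~]# curl http://172.18.100.4:9200/_cluster/health?pretty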
Query the data from any ES node (an excerpt of the response is shown):
[root@els1 ~]# curl -XGET 'http://172.18.100.4:9200/logstash-2018.06.21?pretty=true'
"settings" : {
"index" : {
"refresh_interval" : "5s",
"number_of_shards" : "5",
"provided_name" : "logstash-2018.06.21",
"creation_date" : "1529545212157",
"number_of_replicas" : "1",
"uuid" : "3n74gNpCQUyCLq58vAwL6A",
"version" : {
"created" : "5061099"
}
}
}
}
}
Step 4: configure Nginx as a reverse proxy so that queries still succeed if one ES node fails.
[root@mysql1 ~]# yum install -y nginx
[root@mysql1 ~]# vim /etc/nginx/conf.d/test.conf
upstream ser {
    server 172.18.100.4:9200;
    server 172.18.100.5:9200;
    server 172.18.100.6:9200;
}
server {
    listen 80;
    server_name www.test.com;
    root /app/;
    index index.html;
    location / {
        proxy_pass http://ser;
    }
}
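A quick way to verify the proxy (assuming this host is 172.18.100.7, the address the kibana configuration below points at) is to start nginx and curl it; any of the three ES nodes should answer with the elasticsearch banner JSON:
[root@mysql1 ~]# nginx -t && systemctl start nginx
[root@mysql1 ~]# curl http://172.18.100.7/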
Step 5: configure kibana for graphical viewing (settings in /etc/kibana/kibana.yml):
server.host: "0.0.0.0"
server.basePath: ""
server.name: "172.18.100.8"
elasticsearch.url: "http://172.18.100.7:80" # the reverse proxy server
elasticsearch.preserveHost: true
kibana.index: ".kibana"
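A sketch of the remaining steps (the hostname and rpm filename are assumptions, consistent with the 5.6.10 versions above):
[root@kibana ~]# rpm -ivh kibana-5.6.10-x86_64.rpm
[root@kibana ~]# systemctl start kibana
Then browse to http://172.18.100.8:5601 and create the index pattern logstash-* to match the logstash-%{+YYYY.MM.dd} indices written by logstash.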
Original article: http://blog.51cto.com/10492754/2131477