References:
https://www.cnblogs.com/configure/p/7607302.html (kibana login authentication via an nginx reverse proxy)
https://www.elastic.co/cn/products (official site)
https://zhuanlan.zhihu.com/p/23049700 (filebeat)
https://www.elastic.co/guide/en/logstash/current/plugins-filters-date.html#plugins-filters-date-match (logstash timestamp conversion)
https://blog.csdn.net/wuyinggui10000/article/details/77879016 (ELK timezone issues)
Architecture overview:
filebeat collects the logs and ships them to logstash; logstash normalizes the format and forwards the events to elasticsearch; kibana then presents them in a graphical interface.
OS: all hosts run CentOS 6.x.
1. Installing and configuring elasticsearch:
yum install https://artifacts.elastic.co/downloads/elasticsearch/elasticsearch-6.3.0.rpm -y
vi /etc/elasticsearch/elasticsearch.yml
/etc/init.d/elasticsearch start
/etc/elasticsearch/elasticsearch.yml is configured as follows:
cluster.name: lesu-elk
node.name: node-1
path.data: /home/elasticsearch #where the index data is stored
path.logs: /var/log/elasticsearch
Debugging notes:
1. elasticsearch depends on Java, so Java-related errors can show up; yum install java -y fixes them.
2. Watch the permissions when relocating the data directory. I hit a permission error here, so chown -R elasticsearch /home/elasticsearch was needed.
Test:
curl 'http://localhost:9200/?pretty' should return:
{
  "name" : "node-1",
  "cluster_name" : "lesu-elk",
  "cluster_uuid" : "goAOXrJpQLuqfoHzl7LJMg",
  "version" : {
    "number" : "6.3.0",
    "build_flavor" : "default",
    "build_type" : "rpm",
    "build_hash" : "424e937",
    "build_date" : "2018-06-11T23:38:03.357887Z",
    "build_snapshot" : false,
    "lucene_version" : "7.3.1",
    "minimum_wire_compatibility_version" : "5.6.0",
    "minimum_index_compatibility_version" : "5.0.0"
  },
  "tagline" : "You Know, for Search"
}
which means the node is running normally.
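You can also query the cluster health endpoint (a standard elasticsearch API, nothing specific to this setup) for a quick status check:
curl 'http://localhost:9200/_cluster/health?pretty'
# "status" : "green" or "yellow" (yellow is normal on a single node once indices with replicas exist) means the node is serving requests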
2. Installing and configuring logstash
rpm --import https://artifacts.elastic.co/GPG-KEY-elasticsearch
vi /etc/yum.repos.d/logstash.repo
yum install logstash -y
ln -s /usr/share/logstash/bin/logstash /usr/local/bin/logstash
ln -s /etc/logstash /usr/share/logstash/config
#By default the logstash executable is not on the PATH, and when run directly logstash cannot find its own settings directory, hence these two symlinks.
vi /etc/logstash/filebeat.conf
logstash -f /etc/logstash/filebeat.conf -t #tests the configuration file and exits
logstash -f /etc/logstash/filebeat.conf #starts logstash with this pipeline
/etc/yum.repos.d/logstash.repo contains:
[logstash-6.x]
name=Elastic repository for 6.x packages
baseurl=https://artifacts.elastic.co/packages/6.x/yum
gpgcheck=1
gpgkey=https://artifacts.elastic.co/GPG-KEY-elasticsearch
enabled=1
autorefresh=1
type=rpm-md
A logstash configuration for debugging:
input {
  stdin {
  }
}
filter {
  grok {
    match => { "message" => "%{ATS}" }
  }
  date {
    match => ["time", "yyyy-MM-dd HH:mm:ss,SSS", "UNIX"]
    target => "time"
  }
  ruby {
    code => "event.set('time', event.get('time').time.localtime + 8*60*60)"
  }
}
output {
  stdout {
    codec => rubydebug
  }
}
#The point is the stdin and stdout plugins, which give you real-time input and output for testing the filters.
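For example, assuming the debug pipeline above is saved as /etc/logstash/debug.conf (the file name is my own choice), you can pipe the sample squid log line shown further below straight through it:
echo '1500047983.032 494 192.168.124.4 TCP_MISS/200 656 359 http://linzb.com/wx_auth/WechatQrcode/694a37e9c2b7616fd53119fcd7120927/2 - DIRECT/6.6.6.6 image/png' | logstash -f /etc/logstash/debug.conf
# the rubydebug codec prints every parsed field of the event to the terminal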
The final production configuration:
input {
  beats {
    port => "5044"
  }
}
filter {
  grok {
    match => { "message" => "%{ATS}" }
  }
  date {
    match => ["time", "yyyy-MM-dd HH:mm:ss,SSS", "UNIX"]
    target => "time"
  }
  ruby {
    code => "event.set('time', event.get('time').time.localtime + 8*60*60)"
  }
}
output {
  elasticsearch {
    hosts => ["127.0.0.1:9200"]
    index => "%{[@metadata][beat]}-%{+YYYY.MM.dd}"
  }
}
Notes on the production configuration:
logstash can be seen as a simple three-stage pipeline: input --> filter --> output.
The input stage can read from stdin, redis, and so on; since we pair it with filebeat here, the beats plugin listens on port 5044 for data coming from filebeat.
The filter stage does the processing. Here the grok plugin splits the data into fields; my data is a custom squid log that looks like this:
1500047983.032 494 192.168.124.4 TCP_MISS/200 656 359 http://linzb.com/wx_auth/WechatQrcode/694a37e9c2b7616fd53119fcd7120927/2 - DIRECT/6.6.6.6 image/png
The stock grok-patterns file has no ready-made rule for this format. I only need to split out the fields in front of the image/png part, so I added a custom entry to /usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/logstash-patterns-core-4.1.2/patterns/grok-patterns:
ATS %{NUMBER:time}\s+%{NUMBER:duration}\s%{IP:client_address}\s%{WORD:cache_result}/%{POSINT:status_code}\s%{NUMBER:bytes}\s%{NUMBER:bytes_source}\s%{NOTSPACE:url}\s-\s%{WORD:hierarchy_code}/%{IP:source_address}
%{NUMBER:time}: NUMBER is a matching rule predefined in the grok-patterns file. The expression roughly means: match this token against the NUMBER rule and store it in a field named time; the remaining captures work the same way.
\s: stands for a whitespace character (a space here).
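Run against the sample line above, the ATS pattern should yield fields roughly like the following (rubydebug output, metadata fields omitted; grok captures are strings unless told otherwise):
{
           "time" => "1500047983.032",
       "duration" => "494",
 "client_address" => "192.168.124.4",
   "cache_result" => "TCP_MISS",
    "status_code" => "200",
          "bytes" => "656",
   "bytes_source" => "359",
            "url" => "http://linzb.com/wx_auth/WechatQrcode/694a37e9c2b7616fd53119fcd7120927/2",
 "hierarchy_code" => "DIRECT",
 "source_address" => "6.6.6.6"
}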
Because my data carries UNIX epoch timestamps (and NUMBER captures them as plain numbers, not as a time, so elasticsearch cannot recognize the field as a date either), it is unreadable in elasticsearch, so I also use the date plugin to convert the time format.
match => ["time", "yyyy-MM-dd HH:mm:ss,SSS","UNIX"] means: parse the time field, where UNIX says the source value is a UNIX timestamp.
target => "time" stores the converted value back into the time field, overwriting it in place; you could also write it to a new field and then drop the original with the remove_field option, as sketched below.
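A minimal sketch of that variant (the field name parsed_time is my own):
date {
  match => ["time", "UNIX"]
  target => "parsed_time"
  remove_field => ["time"]   # drop the raw epoch string once parsing succeeds
}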
The ruby block adjusts for the timezone: event.set writes the time field and event.get reads it; see https://blog.csdn.net/wuyinggui10000/article/details/77879016 for details.
The output stage is straightforward: it sends the data to elasticsearch. The key piece is %{[@metadata][beat]}, which picks up a variable passed along by filebeat, so different nodes can be given different index names to tell them apart.
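Once events are flowing, you can confirm the per-beat indices were created (a standard elasticsearch API; the exact names depend on your filebeat config and the current date):
curl 'http://localhost:9200/_cat/indices?v'
# expect entries such as 192.168.124.127-2018.06.30, i.e. the filebeat index value plus the date suffix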
3. Installing and configuring filebeat:
yum install https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-6.3.0-x86_64.rpm -y
vi /etc/filebeat/filebeat.yml
/etc/init.d/filebeat start
/etc/filebeat/filebeat.yml contains:
filebeat.prospectors:
- input_type: log
  paths:
    - /home/ats_log/squid*
output.logstash:
  hosts: ["8.8.8.8:5044"]
  index: 192.168.124.127
paths: lists the logs to collect.
index: this field corresponds to %{[@metadata][beat]} in the logstash config above; it is passed to logstash and becomes the elasticsearch index name.
Note: filebeat tracks how far it has read each log in /var/lib/filebeat/registry. To re-read a log, edit the offset field of the matching entry in that file, or simply delete the whole file; either way, stop filebeat first. A sketch of the simple variant follows.
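Using the SysV init script from this setup:
/etc/init.d/filebeat stop       # filebeat must not be running while the registry is changed
rm /var/lib/filebeat/registry   # forget all read positions; every matched log is shipped again from the start
/etc/init.d/filebeat start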
4. Installing and configuring kibana
yum install https://artifacts.elastic.co/downloads/kibana/kibana-6.3.0-x86_64.rpm -y
vi /etc/kibana/kibana.yml
/etc/init.d/kibana start
The kibana.yml configuration is simple and the defaults are basically fine; the main fields are:
server.port: 5601
server.host: "127.0.0.1"
elasticsearch.url: "http://localhost:9200"
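A quick probe to confirm kibana is listening (plain HTTP, nothing kibana-specific):
curl -I http://127.0.0.1:5601
# any HTTP response here means the service is up; kibana typically answers with 200 or a redirect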
5. Adding login authentication to kibana (copied verbatim from https://www.cnblogs.com/configure/p/7607302.html in case that link ever dies)
vi /etc/yum.repos.d/nginx.repo
yum -y install nginx httpd-tools
mkdir -p /etc/nginx/passwd
htpasswd -c -b /etc/nginx/passwd/kibana.passwd user ******
cp /etc/nginx/conf.d/default.conf /etc/nginx/conf.d/default.conf.backup
vim /etc/nginx/conf.d/default.conf
service nginx restart
/etc/yum.repos.d/nginx.repo contains:
[nginx]
name=nginx repo
baseurl=http://nginx.org/packages/centos/6/$basearch/
gpgcheck=0
enabled=1
/etc/nginx/conf.d/default.conf contains:
server {
    listen 80;
    auth_basic "Kibana Auth";
    auth_basic_user_file /etc/nginx/passwd/kibana.passwd;
    location / {
        proxy_pass http://127.0.0.1:5601;
        proxy_redirect off;
    }
}
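To verify the auth layer (replace user and yourpassword with the credentials you gave htpasswd above):
curl -I http://127.0.0.1/                       # should return 401 Unauthorized
curl -I -u user:yourpassword http://127.0.0.1/  # with valid credentials the request is proxied through to kibana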
Original post: http://blog.51cto.com/linzb/2135687