Elasticsearch is a search server built on Lucene. It provides a distributed, multi-tenant full-text search engine with a RESTful web interface. Elasticsearch is written in Java, released as open source under the Apache License, and is the second most popular enterprise search engine. It is designed for use in the cloud and aims to provide real-time search while being stable, reliable, fast, and easy to install and use.
NRT
Elasticsearch is a near-real-time search platform: there is a slight delay, normally around one second, between indexing a document and it becoming searchable.
Cluster
A cluster is one or more nodes that store data together; one of the nodes acts as the master node, chosen by election, and the cluster provides federated indexing and search across all nodes. A cluster is identified by a unique name, elasticsearch by default. The cluster name matters because every node joins a cluster based on that name, so be sure to use different cluster names in different environments. A cluster may consist of a single node. It is strongly recommended to configure Elasticsearch as a cluster.
Node
A node is a single server that is part of the cluster; it stores data and takes part in the cluster's indexing and search. Like a cluster, a node is identified by a name, which by default is a random string assigned at startup. You can of course set your own. The node name also matters, because it is how you map servers to the nodes in the cluster.
A node joins a cluster by specifying the cluster name. By default every node is configured to join a cluster named elasticsearch, so if you start several nodes and they can discover each other, they will automatically form a cluster called elasticsearch.
Index
An index is a collection of documents that share somewhat similar characteristics, such as an nginx-log index or a syslog index. An index is identified by a name, which must be entirely lowercase; this name is used when indexing, searching, updating and deleting its documents.
An index corresponds to a database in a relational database.
Type
Within an index you can define one or more types. Whether a type is a logical category or a partition is entirely up to you. Typically a type is defined for documents that share a common set of fields. For example, all data generated by ttlsa operations might be stored in a single index named logstash-ttlsa, with separate types defined for user data, post data and comment data.
A type corresponds to a table in a relational database.
Document
A document is the basic unit of information that can be indexed. Documents are expressed in JSON.
Within a type you can store as many documents as you need.
Although a document physically resides in an index, a document must in fact be indexed into an index and assigned a type.
A document corresponds to a row in a relational database.
Shards and replicas
In practice, the data stored in an index may exceed the hardware limits of a single node. For example, a billion documents taking 1TB of space may not fit on a single node's disk, or searches against a single node may simply be too slow. To solve this, Elasticsearch can split an index into multiple shards. When creating an index you can define how many shards you want. Each shard is itself a fully functional, independent index that can live on any node in the cluster.
Sharding matters for two main reasons:
a. it lets you split and scale horizontally, increasing storage capacity;
b. it lets you parallelize operations across shards (and therefore nodes), increasing performance and throughput.
How shards are distributed and how search results are aggregated back from them is managed entirely by Elasticsearch and is transparent to the user.
Network failures and other problems can strike at any time, so for robustness it is strongly recommended to have a failover mechanism in place in case a shard or node becomes unavailable.
For this purpose, Elasticsearch lets you make one or more copies of an index's shards; these are called replica shards, or replicas for short.
Replicas also matter for two main reasons:
High availability in case a shard or node fails. For this reason a replica shard must live on a different node than its primary.
Higher performance and throughput, since searches can run on all replicas in parallel.
In short, an index can be split into multiple shards and can have zero or more replicas. Once replicated, every index has primary shards and replica shards (copies of the primaries). The number of shards and replicas can be defined per index at creation time. After the index is created, the number of replicas can be changed dynamically at any time, but the number of shards cannot.
By default Elasticsearch gives each index 5 primary shards and 1 replica, which implies a cluster of at least 2 nodes. The index then has 5 primary shards and 5 replica shards (one complete copy), for a total of 10 shards per index.
Each Elasticsearch shard is a Lucene index. A single Lucene index has a maximum document count (LUCENE-5843) of 2,147,483,519 documents (Integer.MAX_VALUE - 128). Shard size can be monitored via _cat/shards.
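As a quick illustration (a sketch only; the index name test-index and the settings values are examples, not part of the original setup), shards and replicas are set when an index is created and can be inspected afterwards:
curl -XPUT 'http://192.168.201.240:9200/test-index?pretty' -d '
{
  "settings": { "number_of_shards": 3, "number_of_replicas": 1 }
}'
# list the shards of that index
curl -XGET 'http://192.168.201.240:9200/_cat/shards/test-index?v'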
Logstash is written in JRuby, has a simple message-based architecture, and runs on the Java virtual machine (JVM). Rather than shipping separate agent and server programs, a single Logstash agent can be configured to combine with other open-source software to take on different roles.
Logstash is a completely open-source tool that collects and parses your logs and stores them for later use (for example, searching). Speaking of searching, Logstash comes with a web interface for searching and displaying all of the collected logs.
Two virtual machines were used:
####################### Host Information #########################
HostName :twemproxy
HostIp :192.168.201.240
####################### System Information ########################
Operating System :CentOS release 6.6 (Final)
CPU model :Machine
CPU num :1
CPU cores :2
Processors (logical) :2
Memory :3823M
Phy_mem :10.7GB
and 192.168.201.241
192.168.201.240 twemproxy
192.168.201.241 redis1
Download and install the GPG key
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Add the yum repository
cat >>/etc/yum.repos.d/elasticsearch.repo<<EOF
[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
Install elasticsearch
yum install -y elasticsearch
Download and install the GPG key (the same key as above)
rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch
Add the yum repository
cat >/etc/yum.repos.d/logstash.repo<<EOF
[logstash-2.1]
name=Logstash repository for 2.1.x packages
baseurl=http://packages.elastic.co/logstash/2.1/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1
EOF
Install logstash
yum install -y logstash
cd /usr/local/src
wget https://download.elastic.co/kibana/kibana/kibana-4.3.1-linux-x64.tar.gz
tar zxf kibana-4.3.1-linux-x64.tar.gz
mv kibana-4.3.1-linux-x64 /usr/local/
ln -s /usr/local/kibana-4.3.1-linux-x64/ /usr/local/kibana
yum install -y redis nginx java
4. Managing and configuring Elasticsearch
4.1 Managing Elasticsearch on linux-node1
Modify the Elasticsearch configuration file and set ownership on the data directory
cat >> /etc/elasticsearch/elasticsearch.yml<<EOF
cluster.name: chuck-cluster
node.name: linux-node1
path.data: /data/es-data
path.logs: /var/log/elasticsearch/
bootstrap.mlockall: true
network.host: 0.0.0.0
http.port: 9200
EOF
For reference:
[root@linux-node1 src]# grep -n '^[a-Z]' /etc/elasticsearch/elasticsearch.yml
17:cluster.name: chuck-cluster          # nodes with the same cluster name join the same cluster
23:node.name: linux-node1               # the node's hostname
33:path.data: /data/es-data             # data directory
37:path.logs: /var/log/elasticsearch/   # log directory
43:bootstrap.mlockall: true             # lock memory so it is not pushed out to swap
54:network.host: 0.0.0.0                # listen address / IPs allowed to access
58:http.port: 9200                      # HTTP port
[root@linux-node1 ~]# mkdir -p /data/es-data
[root@linux-node1 src]# chown elasticsearch.elasticsearch /data/es-data/
Start elasticsearch
chkconfig elasticsearch on
/etc/init.d/elasticsearch start
/etc/init.d/elasticsearch status
lsof -i :9200
On a systemd-based system the same thing is done with:
[root@linux-node1 src]# systemctl start elasticsearch
[root@linux-node1 src]# systemctl enable elasticsearch
ln -s '/usr/lib/systemd/system/elasticsearch.service' '/etc/systemd/system/multi-user.target.wants/elasticsearch.service'
[root@linux-node1 src]# systemctl status elasticsearch
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled)
Active: active (running) since Thu 2016-01-14 09:30:25 CST; 14s ago
Docs: http://www.elastic.co
Main PID: 37954 (java)
CGroup: /system.slice/elasticsearch.service
└─37954 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConc...
Jan 14 09:30:25 linux-node1 systemd[1]: Starting Elasticsearch...
Jan 14 09:30:25 linux-node1 systemd[1]: Started Elasticsearch.
[root@linux-node1 src]# netstat -lntup|grep 9200
tcp6 0 0 :::9200 :::* LISTEN 37954/java
Accessing port 9200 returns the node information.
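For example (assuming the node is reachable from your workstation; the exact values depend on the installed version):
curl http://192.168.201.240:9200/?pretty
# returns a small JSON banner containing the node name, cluster_name and version.number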
4.2 Interacting with Elasticsearch
4.2.1 Two ways to interact
Java API :
node client
Transport client
RESTful API
Javascript
.NET
php
Perl
Python
Ruby
4.2.2 Interacting via the RESTful API
Check the current indices and shards (a plugin that visualizes this is shown later)
Output from my own test machine:
curl -i -XGET 'http://192.168.201.240:9200/_count?pretty' -d '{"query": { "match_all": {}}}'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95
{
"count" : 0, 索引0个
"_shards" : { 分区0个
"total" : 0,
"successful" : 0, 成功0个
"failed" : 0 失败0个
}
}
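To get a non-zero count you can index a test document first (a sketch; the index and type names myindex/mytype are made up for illustration):
# index one document
curl -XPOST 'http://192.168.201.240:9200/myindex/mytype?pretty' -d '{"user": "chuck", "msg": "hello"}'
# count again - the document and the default 5 primary shards should now show up
curl -XGET 'http://192.168.201.240:9200/_count?pretty' -d '{"query": { "match_all": {}}}'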
Display indices and shards with the head plugin
/usr/share/elasticsearch/bin/plugin install mobz/elasticsearch-head
When I ran the plugin install on 2016-06-21 it failed with a download error, so I installed it manually: download https://github.com/mobz/elasticsearch-head/archive/master.zip locally, upload it to the server, unzip it, and move all of its contents into /usr/share/elasticsearch/plugins/head/_site. Note that the directory has to be created first:
mkdir -p /usr/share/elasticsearch/plugins/head/_site
cd /usr/share/elasticsearch/plugins/head/_site
mv plugin-descriptor.properties ../
After installing, open http://192.168.201.240:9200/_plugin/head/ in a browser.
Then configure node 2:
cat >> /etc/elasticsearch/elasticsearch.yml<<EOF
cluster.name: chuck-cluster
node.name: linux-node2
path.data: /data/es-data
path.logs: /var/log/elasticsearch/
bootstrap.mlockall: true
network.host: 0.0.0.0
http.port: 9200
EOF
[root@linux-node2 ~]# mkdir -p /data/es-data
[root@linux-node2 src]# chown elasticsearch.elasticsearch /data/es-data/
Start elasticsearch
chkconfig elasticsearch on
/etc/init.d/elasticsearch start
/etc/init.d/elasticsearch status
lsof -i :9200
[root@linux-node2 src]# mkdir -p /usr/share/elasticsearch/plugins/head/_site
[root@linux-node2 src]# cd /usr/share/elasticsearch/plugins/head/_site
[root@linux-node2 _site]# mv /usr/local/src/elasticsearch-head-master/* ./
4.3 Managing Elasticsearch on linux-node2
Copy the configuration file from linux-node1 to linux-node2, modify it, and set ownership on the data directory.
The cluster.name in the configuration file must be identical on all nodes; when nodes start up they use multicast by default to find the other members of the cluster.
scp /etc/elasticsearch/elasticsearch.yml 192.168.201.241:/etc/elasticsearch/elasticsearch.yml
sed -i '23s#node.name: twemproxy#node.name: redis1#g' /etc/elasticsearch/elasticsearch.yml
mkdir -p /data/es-data
chown elasticsearch.elasticsearch /data/es-data/
Start elasticsearch
[root@linux-node2 elasticsearch]# systemctl enable elasticsearch.service
ln -s '/usr/lib/systemd/system/elasticsearch.service' '/etc/systemd/system/multi-user.target.wants/elasticsearch.service'
[root@linux-node2 elasticsearch]# systemctl start elasticsearch.service
[root@linux-node2 elasticsearch]# systemctl status elasticsearch.service
elasticsearch.service - Elasticsearch
Loaded: loaded (/usr/lib/systemd/system/elasticsearch.service; enabled)
Active: active (running) since Thu 2016-01-14 02:56:35 CST; 4s ago
Docs: http://www.elastic.co
Process: 38519 ExecStartPre=/usr/share/elasticsearch/bin/elasticsearch-systemd-pre-exec (code=exited, status=0/SUCCESS)
Main PID: 38520 (java)
CGroup: /system.slice/elasticsearch.service
└─38520 /bin/java -Xms256m -Xmx1g -Djava.awt.headless=true -XX:+UseParNewGC -XX:+UseConc...
Jan 14 02:56:35 linux-node2 systemd[1]: Starting Elasticsearch...
Jan 14 02:56:35 linux-node2 systemd[1]: Started Elasticsearch.
Add the following to the Elasticsearch configuration so that unicast discovery is used (multicast was tried but did not take effect):
[root@linux-node1 ~]# grep -n "^discovery" /etc/elasticsearch/elasticsearch.yml
79:discovery.zen.ping.unicast.hosts: ["linux-node1", "linux-node2"]
[root@linux-node1 ~]# systemctl restart elasticsearch.service
View the shard information in the browser. By default an index is split into 5 shards (the number is tunable). In the screenshot, the shards with a green border are primary shards and the ones without a border are replicas. If a primary shard is lost, its replica is promoted to primary, which provides high availability; primary and replica shards can also be load-balanced to speed up queries. If both the primary and its replica are lost, however, that part of the index is gone for good.
The same discovery setting was then added to the linux-node2 configuration file, together with /etc/hosts entries so that the node names resolve:
[root@linux-node1 ~]# grep -n "^discovery" /etc/elasticsearch/elasticsearch.yml
79:discovery.zen.ping.unicast.hosts: ["linux-node1", "linux-node2"]
192.168.201.240 linux-node1
192.168.201.241 linux-node2
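Once both nodes have been restarted, a quick way to confirm that they formed one cluster (either node can be queried):
curl -XGET 'http://192.168.201.240:9200/_cluster/health?pretty'
# "number_of_nodes" : 2 means both nodes joined; "status" : "green" means all primary and replica shards are allocated
curl -XGET 'http://192.168.201.240:9200/_cat/nodes?v'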
4.4 Monitoring Elasticsearch with the kopf plugin
[root@linux-node1 bin]# /usr/share/elasticsearch/bin/plugin install lmenezes/elasticsearch-kopf
http://192.168.201.240:9200/_plugin/kopf/#!/cluster
Problem: the plugin download failed, so it was installed manually:
mkdir -p /usr/share/elasticsearch/plugins/kopf
cd /usr/share/elasticsearch/plugins/kopf
Then download the archive:
https://github.com/lmenezes/elasticsearch-kopf/archive/master.zip
upload it into that directory, then:
unzip elasticsearch-kopf-master.zip
rm -f elasticsearch-kopf-master.zip
cd elasticsearch-kopf-master/
mv ./* ../
rm -rf elasticsearch-kopf-master/
The screenshot shows each node's load, CPU usage, JVM heap usage, disk usage and uptime.
Besides that, the kopf plugin also exposes a REST API and more. A similar plugin is bigdesk, but bigdesk does not yet support 2.1. It would otherwise be installed like this:
/usr/share/elasticsearch/bin/plugin install lukas-vlcek/bigdesk
4.5 Multicast communication between nodes, and shards
When a node starts, it sends multicast discovery requests; any node it finds with the same cluster name automatically joins the cluster. You can connect to any node, not just the master; the node you connect to merely aggregates and presents the cluster information.
The number of shards can be set when an index is created, but once set it cannot be changed. If both the primary and replica copies of a shard are lost, that data is lost and cannot be recovered, so useless indices can simply be deleted. Old or rarely used indices should be deleted periodically; otherwise Elasticsearch runs low on resources, disk usage grows and searches slow down. If you do not want to delete an index yet, you can close it from a plugin so that it no longer consumes memory.
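Deleting or closing an index can also be done over the REST API (a sketch; logstash-2016.01.01 is a made-up index name):
# delete an index that is no longer needed
curl -XDELETE 'http://192.168.201.240:9200/logstash-2016.01.01?pretty'
# or just close it, so it stops using memory but can be reopened later
curl -XPOST 'http://192.168.201.240:9200/logstash-2016.01.01/_close?pretty'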
5. Configuring Logstash
5.1 Learning Logstash step by step
Start a Logstash instance. -e means run the configuration given on the command line; input/stdin and output/stdout are plugins for standard input and standard output.
/opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
Default filter workers: 1
The output looks like this:
Logstash startup completed
chuck                                                    ==> typed by hand
2016-01-14T06:01:07.184Z linux-node1 chuck               ==> output
www.chuck-blog.com                                       ==> typed by hand
2016-01-14T06:01:18.581Z linux-node1 www.chuck-blog.com  ==> output
Use rubydebug for detailed output; a codec is an encoder/decoder.
[root@linux-node1 bin]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { stdout{ codec => rubydebug} }'
Settings: Default filter workers: 1
Logstash startup completed
chuck ==>手动输入
{
"message" => "chuck",
"@version" => "1",
"@timestamp" => "2016-01-14T06:07:50.117Z",
"host" => "linux-node1"
}                                        ==> output produced with rubydebug
Each piece of output above is called an event. Several related lines of output can also be merged into a single event (for example, the consecutive lines of one multi-line log entry count as one event).
Write events to Elasticsearch with Logstash
[root@linux-node1 bin]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.201.240:9200"] } }'
Settings: Default filter workers: 1
Logstash startup completed
maliang
chuck
chuck-blog.com
www.chuck-bllog.com
The new index created by Logstash can then be seen in Elasticsearch.
You can write to Elasticsearch and at the same time keep a local copy; keeping a local plain-text copy means you no longer need to schedule backups of Elasticsearch to a remote site. Keeping plain text files has three big advantages: 1) text is the simplest format, 2) text can easily be reprocessed, 3) text has the best compression ratio.
[root@linux-node1 bin]# /opt/logstash/bin/logstash -e 'input { stdin{} } output { elasticsearch { hosts => ["192.168.201.240:9200"] } stdout{ codec => rubydebug } }'
Settings: Default filter workers: 1
Logstash startup completed
www.google.com
{
"message" => "www.google.com",
"@version" => "1",
"@timestamp" => "2016-01-14T06:27:49.014Z",
"host" => "linux-node1"
}
www.elastic.co
{
"message" => "www.elastic.co",
"@version" => "1",
"@timestamp" => "2016-01-14T06:27:58.058Z",
"host" => "linux-node1"
}
Start Logstash from a configuration file; events are again written to Elasticsearch
[root@linux-node1 ~]# cat normal.conf
input { stdin { } }
output {
elasticsearch { hosts => ["localhost:9200"] }
stdout { codec => rubydebug }
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f normal.conf
Settings: Default filter workers: 1
Logstash startup completed
123
{
"message" => "123",
"@version" => "1",
"@timestamp" => "2016-01-14T06:51:13.411Z",
"host" => "linux-node1
5.2 Learning the conf file format
input {
    file {
        path => "/var/log/messages"
        type => "syslog"
    }

    file {
        path => "/var/log/apache/access.log"
        type => "apache"
    }
}
Option values in a conf file can have the following types: array, boolean, bytes, hash, number and password.
Array:
path => ["/var/log/messages","/var/log/*.log"]
path => ["/data/mysql/mysql.log"]
Boolean:
ssl_enable => true
Bytes:
my_bytes => "1113"   # 1113 bytes
my_bytes => "10MiB"  # 10485760 bytes
my_bytes => "100kib" # 102400 bytes
my_bytes => "180 mb" # 180000000 bytes
Hash:
match => {
    "field1" => "value1"
    "field2" => "value2"
    ...
}
Number:
port => 33
Password:
my_password => "password"
5.3 Learning the file input plugin
5.3.1 Options of the file input plugin
sincedb_path: path of the file where Logstash records how far it has read
start_position: beginning or end; where to start collecting from, default is end (the tail of the file)
add_field: add an extra field to every event
discover_interval: how often to look for new files matching path, default 15 seconds
A combined sketch is shown below.
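Putting these options together, a file input could look roughly like this (a sketch only; the paths and the extra field are examples, and option availability may vary slightly between Logstash versions):
input {
    file {
        path => ["/var/log/messages", "/var/log/*.log"]     # files to watch
        start_position => "beginning"                        # read from the start on the first run
        sincedb_path => "/var/lib/logstash/sincedb-system"   # where the read offset is remembered
        discover_interval => 15                              # how often (seconds) to look for new files
        add_field => { "env" => "test" }                     # attach an extra field to every event
        type => "system"
    }
}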
5.4 Learning the file output plugin
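The file output plugin writes events back to a file on disk; a minimal sketch (the target path is an example, and by default the plugin writes each event as one JSON line):
output {
    file {
        path => "/var/log/logstash/%{type}-%{+YYYY.MM.dd}.log"   # one file per type per day
    }
}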
5.5 Writing a conf file with input and output plugins
5.5.1 A conf for collecting system logs
[root@linux-node1 ~]# cat system.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
}
output {
elasticsearch {
hosts => ["192.168.201.240:9200"]
index => "system-%{+YYYY.MM.dd}"
}
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f system.conf
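After it has run for a moment, the new index should be visible (quick check):
curl -XGET 'http://192.168.201.240:9200/_cat/indices/system-*?v'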
5.5.2 Collecting the Elasticsearch error log
Here the system log from above and this error log (a Java application log) are collected in one file. An if test on type writes the two kinds of logs into different indices. Note that type (the option name is fixed; it must be type) must not collide with any field of the log format; in other words, the log must not already contain a field named type.
[root@linux-node1 ~]# cat all.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/elasticsearch/chuck-cluster.log"
type => "es-error"
start_position => "beginning"
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.201.240:9200"]
index => "system-%{+YYYY.MM.dd}"
}
}
if [type] == "es-error" {
elasticsearch {
hosts => ["192.168.201.240:9200"]
index => "es-error-%{+YYYY.MM.dd}"
}
}
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f all.conf
5.6 Collecting a whole multi-line error message into one event
5.6.1 An example
Lines beginning with "at org." (a Java stack trace) all belong to the same event, but they show up on separate lines, which makes the log awkward to read, so they need to be merged into a single event.
5.6.2 Introducing the multiline codec
From the official documentation:
input {
    stdin {
        codec => multiline {
            pattern => "pattern, a regexp"
            negate => "true" or "false"
            what => "previous" or "next"
        }
    }
}
pattern: a regular expression that decides when lines are merged
negate: whether the pattern is matched positively or negated
what: merge the line into the previous event or the next one
Test with standard input and output to prove that the multiple lines are collected into one event
[root@linux-node1 ~]# cat muliline.conf
input {
stdin {
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
}
output {
stdout {
codec => "rubydebug"
}
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f muliline.conf
Settings: Default filter workers: 1
Logstash startup completed
[1
[2
{
"@timestamp" => "2016-01-15T06:46:10.712Z",
"message" => "[1",
"@version" => "1",
"host" => "linux-node1"
}
chuck
chuck-blog.com
123456
[3
{
"@timestamp" => "2016-01-15T06:46:16.306Z",
"message" => "[2\nchuck\nchuck-bloh\nchuck-blog.com\n123456",
"@version" => "1",
"tags" => [
[0] "multiline"
],
"host" => "linux-node1"
Now apply the same multiline setup to the es-error input in all.conf
[root@linux-node1 ~]# cat all.conf
input {
file {
path => "/var/log/messages"
type => "system"
start_position => "beginning"
}
file {
path => "/var/log/elasticsearch/chuck-clueser.log"
type => "es-error"
start_position => "beginning"
codec => multiline {
pattern => "^\["
negate => true
what => "previous"
}
}
}
output {
if [type] == "system" {
elasticsearch {
hosts => ["192.168.201.240:9200"]
index => "system-%{+YYYY.MM.dd}"
}
}
if [type] == "es-error" {
elasticsearch {
hosts => ["192.168.201.240:9200"]
index => "es-error-%{+YYYY.MM.dd}"
}
}
}
6. Getting familiar with Kibana
6.1 Editing the Kibana configuration file
yum install screen -y
[root@linux-node1 ~]# grep '^[a-Z]' /usr/local/kibana/config/kibana.yml
server.port: 5601                                    # Kibana port
server.host: "0.0.0.0"                               # address to serve on
elasticsearch.url: "http://192.168.201.240:9200"     # how to reach Elasticsearch
kibana.index: ".kibana"                              # the index Kibana creates in Elasticsearch
Open a screen session and start Kibana inside it
[root@linux-node1 ~]# screen
[root@linux-node1 ~]# /usr/local/kibana/bin/kibana
Detach from the screen session with Ctrl+a then d
Open 192.168.201.240:5601 in a browser
6.2 Verifying that the multiline codec works for the error log
Add an es-error index pattern in Kibana
Note that before using Kibana, both Logstash and Kibana must already be running in the background (for example inside screen).
The default fields are visible
Choose Discover to view the events
Verify that the multiline codec worked for the error log
7. Collecting nginx, syslog and TCP logs with Logstash
7.1 Collecting the nginx access log
Here the json codec is used to split the log into fields as key-value pairs, which makes the log clearer and easier to search and also lowers CPU load.
Change the log format in the nginx configuration file to JSON
[root@linux-node1 ~]# sed -n '15,33p' /etc/nginx/nginx.conf
    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';
    log_format json '{ "@timestamp": "$time_local", '
                    '"@fields": { '
                    '"remote_addr": "$remote_addr", '
                    '"remote_user": "$remote_user", '
                    '"body_bytes_sent": "$body_bytes_sent", '
                    '"request_time": "$request_time", '
                    '"status": "$status", '
                    '"request": "$request", '
                    '"request_method": "$request_method", '
                    '"http_referrer": "$http_referer", '
                    '"body_bytes_sent":"$body_bytes_sent", '
                    '"http_x_forwarded_for": "$http_x_forwarded_for", '
                    '"http_user_agent": "$http_user_agent" } }';
    # access_log /var/log/nginx/access_json.log main;
    access_log /var/log/nginx/access_json.log json;
Start nginx
[root@linux-node1 ~]# nginx -t
nginx: the configuration file /etc/nginx/nginx.conf syntax is ok
nginx: configuration file /etc/nginx/nginx.conf test is successful
[root@linux-node1 ~]# nginx
[root@linux-node1 ~]# netstat -lntup|grep 80
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 43738/nginx: master
tcp6 0 0 :::80 :::* LISTEN 43738/nginx: master
The log format now looks like this
Use Logstash to collect the nginx access log by extending all.conf
Add the nginx-log index to Kibana and display it
7.2 Collecting syslog
Earlier, the system log /var/log/messages was collected with the file input; in a real production environment, however, the syslog input plugin is used to receive the logs directly.
Modify the rsyslog configuration so that log messages are forwarded to port 514 (rsyslog then needs to be restarted, as shown below)
[root@linux-node1 ~]# vim /etc/rsyslog.conf
90 *.* @@192.168.201.240:514
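rsyslog has to be restarted for the change to take effect, and logger can be used to generate a test message (assuming a systemd-based host, as used elsewhere in this setup):
systemctl restart rsyslog
logger "hello from rsyslog"    # should show up in the system-syslog index a moment later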
Add the system-syslog input to all.conf and start it
[root@linux-node1 ~]# cat all.conf
input {
    syslog {
        type => "system-syslog"
        host => "192.168.201.240"
        port => "514"
    }
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/nginx/access_json.log"
        codec => json
        start_position => "beginning"
        type => "nginx-log"
    }

    file {
        path => "/var/log/elasticsearch/chuck-cluster.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {

    if [type] == "system" {
        elasticsearch {
            hosts => ["192.168.201.240:9200"]
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => ["192.168.201.240:9200"]
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log" {
        elasticsearch {
            hosts => ["192.168.201.240:9200"]
            index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog" {
        elasticsearch {
            hosts => ["192.168.201.240:9200"]
            index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
}
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f all.conf
The new system-syslog index can then be seen in the Elasticsearch plugin
7.3 Collecting TCP logs
Write tcp.conf
[root@linux-node1 ~]# cat tcp.conf
input {
    tcp {
        host => "192.168.201.240"
        port => "6666"
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}
Write data to port 6666 with nc
[root@linux-node1 ~]# nc 192.168.201.240 6666 </var/log/yum.log
Write data into the TCP pseudo-device
[root@linux-node1 ~]# echo "chuck" >/dev/tcp/192.168.201.240/6666
8. Decoupling Logstash with a message queue
8.1 The message-queue architecture, illustrated
The data source writes into a Logstash input plugin; an output plugin then publishes the events to a message queue. A separate Logstash indexing instance reads from the queue with an input plugin, processes the events with filter plugins, and finally writes them into Elasticsearch with an output plugin.
If you do not use grok regular-expression matching in production, you can instead write a Python script that reads messages off the queue and writes them into Elasticsearch.
8.2 Advantages of this architecture
9. Introducing Redis into the architecture
9.1 Using Redis to collect Logstash events
Modify the Redis configuration file, start Redis, and check that it is reachable
[root@linux-node1 ~]# vim /etc/redis.conf
37 daemonize yes
65 bind 192.168.201.240
[root@linux-node1 ~]# systemctl start redis
[root@linux-node1 ~]# netstat -lntup|grep 6379
tcp 0 0 192.168.201.240:6379 0.0.0.0:* LISTEN 45270/redis-server
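A quick connectivity check before pointing Logstash at it:
redis-cli -h 192.168.201.240 ping    # should answer PONG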
Write redis-out.conf
[root@linux-node1 ~]# cat redis-out.conf
input{
    stdin{
    }
}
output{
    redis{
        host => "192.168.201.240"
        port => "6379"
        db => "6"
        data_type => "list"     # the data type is a list
        key => "demo"
    }
}
Start the configuration file and type some input
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f redis-out.conf
Settings: Default filter workers: 1
Logstash startup completed
chuck
chuck-blog
Connect with redis-cli and look at the data that was written
[root@linux-node1 ~]# redis-cli -h 192.168.201.240
192.168.201.240:6379> info          # run info to inspect the server
# Server
redis_version:2.8.19
redis_git_sha1:00000000
redis_git_dirty:0
redis_build_id:c0359e7aa3798aa2
redis_mode:standalone
os:Linux 3.10.0-229.el7.x86_64 x86_64
arch_bits:64
multiplexing_api:epoll
gcc_version:4.8.3
process_id:45270
run_id:83f428b96e87b7354249fe42bd19ee8a8643c94e
tcp_port:6379
uptime_in_seconds:1111
uptime_in_days:0
hz:10
lru_clock:10271973
config_file:/etc/redis.conf
# Clients
connected_clients:2
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
# Memory
used_memory:832048
used_memory_human:812.55K
used_memory_rss:5193728
used_memory_peak:832048
used_memory_peak_human:812.55K
used_memory_lua:35840
mem_fragmentation_ratio:6.24
mem_allocator:jemalloc-3.6.0
# Persistence
loading:0
rdb_changes_since_last_save:0
rdb_bgsave_in_progress:0
rdb_last_save_time:1453112484
rdb_last_bgsave_status:ok
rdb_last_bgsave_time_sec:0
rdb_current_bgsave_time_sec:-1
aof_enabled:0
aof_rewrite_in_progress:0
aof_rewrite_scheduled:0
aof_last_rewrite_time_sec:-1
aof_current_rewrite_time_sec:-1
aof_last_bgrewrite_status:ok
aof_last_write_status:ok
# Stats
total_connections_received:2
total_commands_processed:2
instantaneous_ops_per_sec:0
total_net_input_bytes:164
total_net_output_bytes:9
instantaneous_input_kbps:0.00
instantaneous_output_kbps:0.00
rejected_connections:0
sync_full:0
sync_partial_ok:0
sync_partial_err:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:9722
# Replication
role:master
connected_slaves:0
master_repl_offset:0
repl_backlog_active:0
repl_backlog_size:1048576
repl_backlog_first_byte_offset:0
repl_backlog_histlen:0
# CPU
used_cpu_sys:1.95
used_cpu_user:0.40
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
# Keyspace
db6:keys=1,expires=0,avg_ttl=0
192.168.201.240:6379> select 6            # switch to db 6
OK
192.168.201.240:6379[6]> keys *           # list keys; the demo key written above is there
1) "demo"
192.168.201.240:6379[6]> LINDEX demo -2   # view a message
"{\"message\":\"chuck\",\"@version\":\"1\",\"@timestamp\":\"2016-01-18T10:21:23.583Z\",\"host\":\"linux-node1\"}"
192.168.201.240:6379[6]> LINDEX demo -1   # view a message
"{\"message\":\"chuck-blog\",\"@version\":\"1\",\"@timestamp\":\"2016-01-18T10:25:54.523Z\",\"host\":\"linux-node1\"}"
To have data available for the next step, where an input plugin ships the messages from Redis to Elasticsearch, write some more data into Redis
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f redis-out.conf
Settings: Default filter workers: 1
Logstash startup completed
chuck
chuck-blog
a
b
c
d
e
f
g
h
i
j
k
l
m
n
o
p
q
r
s
t
u
v
w
x
y
z
Check the length of the demo key in Redis
192.168.201.240:6379[6]> llen demo
(integer) 28
9.2 Shipping messages from Redis to Elasticsearch
Write redis-in.conf
[root@linux-node1 ~]# cat redis-in.conf
input{
    redis {
        host => "192.168.201.240"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "demo"
    }
}
output{
    elasticsearch {
        hosts => ["192.168.201.240:9200"]
        index => "redis-demo-%{+YYYY.MM.dd}"
    }
}
Start the configuration file
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f redis-in.conf
Settings: Default filter workers: 1
Logstash startup completed
Keep re-checking the length of the demo key (it is consumed quickly, so be fast about it)
192.168.201.240:6379[6]> llen demo
(integer) 28
192.168.201.240:6379[6]> llen demo
(integer) 28
192.168.201.240:6379[6]> llen demo
(integer) 19      # the messages in Redis are being written into Elasticsearch
192.168.201.240:6379[6]> llen demo
(integer) 7       # the messages in Redis are being written into Elasticsearch
192.168.201.240:6379[6]> llen demo
(integer) 0
The new redis-demo index can be seen in Elasticsearch
9.3 Routing the contents of all.conf through Redis
Write shipper.conf as the Logstash configuration that ships events into Redis
[root@linux-node1 ~]# cp all.conf shipper.conf
[root@linux-node1 ~]# vim shipper.conf
input {
    syslog {
        type => "system-syslog"
        host => "192.168.201.240"
        port => "514"
    }
    tcp {
        type => "tcp-6666"
        host => "192.168.201.240"
        port => "6666"
    }
    file {
        path => "/var/log/messages"
        type => "system"
        start_position => "beginning"
    }
    file {
        path => "/var/log/nginx/access_json.log"
        codec => json
        start_position => "beginning"
        type => "nginx-log"
    }
    file {
        path => "/var/log/elasticsearch/chuck-cluster.log"
        type => "es-error"
        start_position => "beginning"
        codec => multiline {
            pattern => "^\["
            negate => true
            what => "previous"
        }
    }
}
output {
    if [type] == "system" {
        redis {
            host => "192.168.201.240"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "system"
        }
    }
    if [type] == "es-error" {
        redis {
            host => "192.168.201.240"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "es-error"
        }
    }
    if [type] == "nginx-log" {
        redis {
            host => "192.168.201.240"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "nginx-log"
        }
    }
    if [type] == "system-syslog" {
        redis {
            host => "192.168.201.240"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "system-syslog"
        }
    }
    if [type] == "tcp-6666" {
        redis {
            host => "192.168.201.240"
            port => "6379"
            db => "6"
            data_type => "list"
            key => "tcp-6666"
        }
    }
}
Check the keys in Redis
192.168.201.240:6379[6]> select 6
OK
192.168.201.240:6379[6]> keys *
1) "system"
2) "nginx-log"
3) "tcp-6666"
Write indexer.conf as the Logstash configuration that reads from Redis and sends to Elasticsearch
[root@linux-node1 ~]# cat indexer.conf
input {
    redis {
        type => "system-syslog"
        host => "192.168.201.240"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "system-syslog"
    }
    redis {
        type => "tcp-6666"
        host => "192.168.201.240"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "tcp-6666"
    }
    redis {
        type => "system"
        host => "192.168.201.240"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "system"
    }
    redis {
        type => "nginx-log"
        host => "192.168.201.240"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "nginx-log"
    }
    redis {
        type => "es-error"
        host => "192.168.201.240"
        port => "6379"
        db => "6"
        data_type => "list"
        key => "es-error"
    }
}
output {
    if [type] == "system" {
        elasticsearch {
            hosts => "192.168.201.240"
            index => "system-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "es-error" {
        elasticsearch {
            hosts => "192.168.201.240"
            index => "es-error-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "nginx-log" {
        elasticsearch {
            hosts => "192.168.201.240"
            index => "nginx-log-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "system-syslog" {
        elasticsearch {
            hosts => "192.168.201.240"
            index => "system-syslog-%{+YYYY.MM.dd}"
        }
    }
    if [type] == "tcp-6666" {
        elasticsearch {
            hosts => "192.168.201.240"
            index => "tcp-6666-%{+YYYY.MM.dd}"
        }
    }
}
Start shipper.conf
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f shipper.conf
Settings: Default filter workers: 1
Because the log volume is small, everything is shipped to Elasticsearch almost immediately and the keys disappear, so write plenty of data into the logs
[root@linux-node1 ~]# for n in `seq 10000` ;do echo $n >>/var/log/elasticsearch/chuck-cluster.log;done
[root@linux-node1 ~]# for n in `seq 10000` ;do echo $n >>/var/log/nginx/access_json.log;done
[root@linux-node1 ~]# for n in `seq 10000` ;do echo $n >>/var/log/messages;done
Watch the key length grow
(integer) 2481
192.168.201.240:6379[6]> llen system
(integer) 2613
192.168.201.240:6379[6]> llen system
(integer) 2795
192.168.201.240:6379[6]> llen system
(integer) 2960
Start indexer.conf
[root@linux-node1 ~]# /opt/logstash/bin/logstash -f indexer.conf
Settings: Default filter workers: 1
Logstash startup completed
Watch the key length shrink
192.168.201.240:6379[6]> llen nginx-log
(integer) 9680
192.168.201.240:6379[6]> llen nginx-log
(integer) 9661
192.168.201.240:6379[6]> llen nginx-log
(integer) 9661
192.168.201.240:6379[6]> llen system
(integer) 9591
192.168.201.240:6379[6]> llen system
(integer) 9572
192.168.201.240:6379[6]> llen system
(integer) 9562
View the nginx-log index in Kibana
10. Learning Logstash filter plugins
10.1 Getting to know grok
The previous sections covered input and output plugins; this one covers filter plugins
There are many filter plugins; here we look at grok, which splits a log line into fields using regular-expression matching. In practice, Apache logs do not support JSON output, so grok matching is the only option; MySQL slow-query logs likewise cannot be split any other way and can only be parsed with grok regular expressions.
There are many ready-made grok patterns on GitHub that can be referenced directly:
https://github.com/logstash-plugins/logstash-patterns-core/blob/master/patterns/grok-patterns
A Logstash installation also ships with grok patterns that can be referenced directly; the path is:
[root@linux-node1 patterns]# pwd
/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-patterns-core-2.0.2/patterns
10.2 Writing grok.conf based on the official documentation
[root@linux-node1 ~]# cat grok.conf
input {
    stdin {}
}
filter {
    grok {
        match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request} %{NUMBER:bytes} %{NUMBER:duration}" }
    }
}
output {
    stdout {
        codec => "rubydebug"
    }
}
Start Logstash and feed it the sample input from the official documentation; the line is split into fields as shown below
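For reference, the sample line from the official documentation and (roughly) the rubydebug output it produces look like this (abbreviated; the @timestamp, @version and host fields are omitted):
55.3.244.1 GET /index.html 15824 0.043
{
       "message" => "55.3.244.1 GET /index.html 15824 0.043",
        "client" => "55.3.244.1",
        "method" => "GET",
       "request" => "/index.html",
         "bytes" => "15824",
      "duration" => "0.043"
}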
10.3 Collecting MySQL slow-query logs with Logstash
Import a MySQL slow-query log from production; a sample entry looks like this:
# Time: 160108 15:46:14
# User@Host: dev_select_user[dev_select_user] @ [192.168.97.86] Id: 714519
# Query_time: 1.638396 Lock_time: 0.000163 Rows_sent: 40 Rows_examined: 939155
SET timestamp=1452239174;
SELECT DATE(create_time) as day,HOUR(create_time) as h,round(avg(low_price),2) as low_price
FROM t_actual_ad_num_log WHERE create_time>='2016-01-07' and ad_num<=10
GROUP BY DATE(create_time),HOUR(create_time);
Handle it with multiline and write mysql-slow.conf
[root@linux-node1 ~]# cat mysql-slow.conf
input{
    file {
        path => "/root/slow.log"
        type => "mysql-slow-log"
        start_position => "beginning"
        codec => multiline {
            pattern => "^# User@Host:"
            negate => true
            what => "previous"
        }
    }
}
filter {
    # drop sleep events
    grok {
        match => { "message" => "SELECT SLEEP" }
        add_tag => [ "sleep_drop" ]
        tag_on_failure => [] # prevent default _grokparsefailure tag on real records
    }
    if "sleep_drop" in [tags] {
        drop {}
    }
    grok {
        match => [ "message", "(?m)^# User@Host: %{USER:user}\[[^\]]+\] @ (?:(?<clienthost>\S*) )?\[(?:%{IP:clientip})?\]\s+Id: %{NUMBER:row_id:int}\s*# Query_time: %{NUMBER:query_time:float}\s+Lock_time: %{NUMBER:lock_time:float}\s+Rows_sent: %{NUMBER:rows_sent:int}\s+Rows_examined: %{NUMBER:rows_examined:int}\s*(?:use %{DATA:database};\s*)?SET timestamp=%{NUMBER:timestamp};\s*(?<query>(?<action>\w+)\s+.*)\n#\s*" ]
    }
    date {
        match => [ "timestamp", "UNIX" ]
        remove_field => [ "timestamp" ]
    }
}
output {
    stdout{
        codec => "rubydebug"
    }
}
Run the configuration file and inspect the grok matching result
11. Taking ELK to production
11.1 Log classification
System logs        rsyslog   logstash syslog plugin
Access logs        nginx     logstash codec json
Error logs         file      logstash file + multiline
Application logs   file      logstash codec json
Device logs        syslog    logstash syslog plugin
Debug logs         file      logstash json or multiline
11.2 Log standardization
1) Fix and standardize the log paths
2) Use JSON format wherever possible
11.3 Order of log collection
Start with system logs -> error logs -> application logs -> access logs
Original article: http://www.chuck-blog.com/chuck/201.html
Source: http://www.cnblogs.com/kai2016/p/5951139.html