Tags: elasticsearch logstash kibana centos6.5
With the introduction above out of the way, we can move on to deploying ELK. Before deployment, some preparation is needed.
The tests that follow run on two machines, both CentOS 6.5 64-bit.
1. Operating system: CentOS 6.5, 64-bit
2. iptables disabled
3. SELinux disabled
Hostname | IP address
linux-node01.lavenliu.com | 192.168.20.141
linux-node02.lavenliu.com | 192.168.20.138
The operating system installation steps are omitted here; a minimal install is recommended.
Replace the system's default yum repositories with Aliyun's mirror so that package downloads are faster:
# mv /etc/yum.repos.d/CentOS-Base.repo /etc/yum.repos.d/CentOS-Base.repo.backup
# wget -O /etc/yum.repos.d/CentOS-Base.repo http://mirrors.aliyun.com/repo/Centos-6.repo
# yum makecache # this step is optional
Synchronize the system clock against Aliyun's NTP server and add a cron entry to keep it in sync:
# ntpdate ntp1.aliyun.com
# date
# echo "*/5 * * * * /usr/sbin/ntpdate ntp1.aliyun.com &> /dev/null" >> /var/spool/cron/root
# crontab -l
Edit /etc/sysctl.conf and append the following kernel parameters:
vm.max_map_count = 262144
fs.file-max = 65535
vm.swappiness = 1
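Reload the settings so they take effect without a reboot (this assumes they were appended to /etc/sysctl.conf as above):
# sysctl -p
vm.max_map_count = 262144
fs.file-max = 65535
vm.swappiness = 1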
Modify the /etc/security/limits.conf configuration file by appending the following two lines:
* soft nofile 65535
* hard nofile 65535
Then log out and log back in, and verify with the following command:
# ulimit -n
65535
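Before setting the environment variables below, the JDK itself has to be unpacked. A minimal sketch, assuming the jdk-8u65-linux-x64.tar.gz tarball from /home/lavenliu/tools, which extracts to a jdk1.8.0_65 directory (verify the directory name after extraction):
# cd /home/lavenliu/tools
# tar -xf jdk-8u65-linux-x64.tar.gz -C /usr/local/
# ln -s /usr/local/jdk1.8.0_65 /usr/local/jdk # the symlink matches the JAVA_HOME used below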
ElasticSearch requires a JDK; version 1.8.0_25 or later is recommended. Edit the /etc/profile configuration file and append the following lines (note the variable must be named CLASSPATH for Java to read it):
JAVA_HOME=/usr/local/jdk
JRE_HOME=${JAVA_HOME}/jre
CLASSPATH=".:${JAVA_HOME}/lib:${JRE_HOME}/lib"
PATH=".:$JAVA_HOME/bin:$PATH"
export JAVA_HOME
export JRE_HOME
export CLASSPATH
After that, re-read /etc/profile so the settings take effect:
# . /etc/profile
# java -version
java version "1.8.0_65"
Java(TM) SE Runtime Environment (build 1.8.0_65-b17)
Java HotSpot(TM) 64-Bit Server VM (build 25.65-b01, mixed mode)
At this point the Java environment is fully configured.
Next, prepare the software packages ELK needs, listed in the table below:
Software | Version | Target hosts | Install path
JDK | 1.8.0_65 | linux-node01, linux-node02 | /usr/local/jdk
ElasticSearch | 1.7.0 | linux-node01, linux-node02 | /usr/local/elasticsearch
LogStash | 1.5.3 | linux-node01, linux-node02 | /usr/local/logstash
Kibana | 4.1.1 | linux-node01, linux-node02 | /usr/local/kibana
Redis | 2.4.10 | linux-node01, linux-node02 | yum default location
Assume these packages have already been downloaded to the /home/lavenliu/tools directory:
# ll
total 306020
-rw-r--r-- 1 root root 28501532 Jan 15 11:20 elasticsearch-1.7.0.tar.gz
-rw-r--r-- 1 root root 181260798 Nov 13 11:28 jdk-8u65-linux-x64.tar.gz
-rw-r--r-- 1 root root 11676499 Jan 15 11:20 kibana-4.1.1-linux-x64.tar.gz
-rw-r--r-- 1 root root 91914390 Jan 15 11:22 logstash-1.5.3.tar.gz
If it is not already on hand, ElasticSearch can also be downloaded directly:
# wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.7.0.tar.gz
Once the download completes, simply extract it and it is ready to run:
# tar -xf elasticsearch-1.7.0.tar.gz -C /usr/local/
# ln -s /usr/local/elasticsearch-1.7.0/ /usr/local/elasticsearch
Download the ElasticSearch service wrapper so that ElasticSearch can be managed as a system service:
# wget https://github.com/elasticsearch/elasticsearch-servicewrapper/archive/master.tar.gz
# tar -xf master.tar.gz
# mv elasticsearch-servicewrapper-master/service/ /usr/local/elasticsearch/bin/
# /usr/local/elasticsearch/bin/service/elasticsearch install
Detected RHEL or Fedora:
Installing the Elasticsearch daemon..
Check that the elasticsearch init script now exists under /etc/init.d/:
# ls /etc/init.d/elasticsearch
/etc/init.d/elasticsearch
# chkconfig --list |grep elastic
elasticsearch 0:off 1:off 2:on 3:on 4:on 5:on 6:off
Next, configure ElasticSearch. The configuration file is /usr/local/elasticsearch/config/elasticsearch.yml; make the following changes (the line numbers in the comments refer to positions inside that file):
# If several clusters coexist on one network, cluster.name must be unique;
# nodes in the same cluster discover each other via multicast
# line 32
cluster.name: lavenliu
# line 40
# node name; the hostname works well here
node.name: "linux-node01"
# line 47
# whether this node may be elected master
node.master: true
# line 51
# whether this node stores data
node.data: true
# line 107
# 5 primary shards per index by default; sharding makes indices easier to manage
index.number_of_shards: 5
# line 111
# number of replicas per shard
index.number_of_replicas: 1
## Path-related settings
# line 145
path.conf: /usr/local/elasticsearch/config
# line 149
# where ES stores index data
path.data: /usr/local/elasticsearch/data
# line 159
# where ES keeps temporary work files
path.work: /usr/local/elasticsearch/work
# line 163
# where ES writes its log files
path.logs: /usr/local/elasticsearch/logs
# line 167
# where ES plugins are installed
path.plugins: /usr/local/elasticsearch/plugins
## Memory-related settings
# line 184
bootstrap.mlockall: true
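Note that bootstrap.mlockall only takes effect if the process is allowed to lock memory. A hedged addition to /etc/security/limits.conf, alongside the nofile entries set earlier (the wildcard covers whichever user runs ES):
* soft memlock unlimited
* hard memlock unlimited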
With the configuration complete, ES can now be started:
# /etc/init.d/elasticsearch start
Starting Elasticsearch...
Waiting for Elasticsearch................
running: PID:19257
Check whether ES started successfully:
# netstat -antup | grep -E "9200|9300"
tcp 0 0 :::9200 :::* LISTEN 18482/java
tcp 0 0 :::9300 :::* LISTEN 18482/java
Run a simple check from the command line with curl:
# curl http://192.168.20.141:9200
{
"status" : 200,
"name" : "linux-node01",
"cluster_name" : "lavenliu",
"version" : {
"number" : "1.7.0",
"build_hash" : "929b9739cae115e73c346cb5f9a6f24ba735a743",
"build_timestamp" : "2015-07-16T14:31:07Z",
"build_snapshot" : false,
"lucene_version" : "4.10.4"
},
"tagline" : "You Know, for Search"
}
Since ES is implemented in Java, JVM-related parameters can be tuned to suit the environment in the /usr/local/elasticsearch/bin/service/elasticsearch.conf configuration file.
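For example, the heap size is set in that file; the parameter name below comes from the service wrapper, and 1024 MB is only an illustration, to be sized to the machine's RAM (a common rule of thumb is half of physical memory):
set.default.ES_HEAP_SIZE=1024 # heap size in MB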
The ElasticSearch service is now running, but how do we talk to it? Two options are available: the Java API and the RESTful API. For example, a document count over the RESTful API:
# curl -i -XGET 'http://192.168.20.141:9200/_count?pretty' -d '
{
    "query": {
        "match_all": {}
    }
}
'
HTTP/1.1 200 OK
Content-Type: application/json; charset=UTF-8
Content-Length: 95
{
"count" : 0,
"_shards" : {
"total" : 0,
"successful" : 0,
"failed" : 0
}
}
The command-line workflow above is serviceable for operations staff, but a graphical view is friendlier. Install the Marvel plugin on linux-node01:
# /usr/local/elasticsearch/bin/plugin -i elasticsearch/marvel/latest
-> Installing elasticsearch/marvel/latest...
Trying http://download.elasticsearch.org/elasticsearch/marvel/marvel-latest.zip...
Downloading ..........DONE
Installed elasticsearch/marvel/latest into /usr/local/elasticsearch/plugins/marvel
Once Marvel is installed there is no need to restart ES; open http://192.168.20.141:9200/_plugin/marvel in a browser to verify it.
Next, click "Dashboard -> Sense" in the top-right corner.
In the left-hand pane of Sense, enter:
POST /myindex-demo/test
{
"user": "lavenliu",
"mesg": "hello, lavenliu!"
}
The right-hand pane returns:
{
"_index": "myindex-demo",
"_type": "test",
"_id": "AVRB99EIHo5TBrBPNTZj",
"_version": 1,
"created": true
}
Next, use GET to retrieve the document just created in the myindex-demo index. In the left-hand pane:
GET /myindex-demo/test/AVRB99EIHo5TBrBPNTZj
The right-hand pane returns:
{
"_index": "myindex-demo",
"_type": "test",
"_id": "AVRB99EIHo5TBrBPNTZj",
"_version": 1,
"found": true,
"_source": {
"user": "lavenliu",
"mesg": "hello, lavenliu!"
}
}
Continue querying in the left-hand pane:
GET /myindex-demo/test/AVRB99EIHo5TBrBPNTZj/_source
The right-hand pane returns:
{
"user": "lavenliu",
"mesg": "hello, lavenliu!"
}
Next, try a full-text search by appending the q query-string parameter to the _search endpoint. In the left-hand pane:
GET /myindex-demo/test/_search?q=hello
The right-hand pane returns the following:
{
"took": 355,
"timed_out": false,
"_shards": {
"total": 5,
"successful": 5,
"failed": 0
},
"hits": {
"total": 1,
"max_score": 0.15342641,
"hits": [
{
"_index": "myindex-demo",
"_type": "test",
"_id": "AVRB99EIHo5TBrBPNTZj",
"_score": 0.15342641,
"_source": {
"user": "lavenliu",
"mesg": "hello, lavenliu!"
}
}
]
}
}
Now install the head plugin on linux-node01:
# /usr/local/elasticsearch/bin/plugin -i mobz/elasticsearch-head
-> Installing mobz/elasticsearch-head...
Trying https://github.com/mobz/elasticsearch-head/archive/master.zip...
Downloading ..........DONE
Installed mobz/elasticsearch-head into /usr/local/elasticsearch/plugins/head
Once installed, head can likewise be used without restarting ES; open http://192.168.20.141:9200/_plugin/head in a browser.
After configuring the linux-node02 machine the same way and starting ES on it, refresh the head page: both nodes now appear in the cluster.
Cluster health values:
Green: all primary and replica shards are allocated.
Yellow: all primary shards are allocated, but some replica shards are missing.
Red: some primary shards are missing.
The cluster health can also be fetched from the command line with curl:
# curl -XGET http://192.168.20.141:9200/_cluster/health?pretty
{
"cluster_name" : "lavenliu", # 集群名称
"status" : "green", # 集群状态
"timed_out" : false,
"number_of_nodes" : 2, # 节点数量
"number_of_data_nodes" : 2, # 有两个数据节点
"active_primary_shards" : 10, # 有两个索引,每个索引5个分片
"active_shards" : 20, # 默认分片都有一个副本,所以10个分片有10个副本
"relocating_shards" : 0, # 正在迁移的分片数量
"initializing_shards" : 0, # 正在初始化的分片数量
"unassigned_shards" : 0, # 未分配的分片数量
"delayed_unassigned_shards" : 0,
"number_of_pending_tasks" : 0,
"number_of_in_flight_fetch" : 0
}
Here Redis is installed via yum; run the following on both machines:
# yum install -y redis
linux-node02 will be configured as the Redis server. linux-node01 only needs the Redis package so that the redis-cli client command is available there; the Redis service itself is not started on linux-node01.
Next, configure Redis on linux-node02 by editing the /etc/redis.conf configuration file: change "bind 127.0.0.1" to "bind 192.168.20.138" and save.
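The same edit can be made non-interactively; a one-liner sketch, assuming the stock CentOS /etc/redis.conf layout:
# sed -i 's/^bind 127.0.0.1/bind 192.168.20.138/' /etc/redis.conf
# grep ^bind /etc/redis.conf
bind 192.168.20.138
Then start the Redis service on linux-node02: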
# /etc/init.d/redis start
Check that Redis started successfully and that its service port is listening:
# netstat -antup | grep redis
tcp        0      0 192.168.20.138:6379    0.0.0.0:*    LISTEN    1659/redis-server
# ps -ef | grep redis | grep -v grep
redis     1659     1  0 20:17 ?        00:00:00 /usr/sbin/redis-server /etc/redis.conf
This output shows the Redis server side is set up. Can linux-node01 now connect to linux-node02 with the redis-cli command? Try the following:
# redis-cli -h 192.168.20.138 -p 6379
redis 192.168.20.138:6379> info # inspect the server with the info command
redis_version:2.4.10
redis_git_sha1:00000000
redis_git_dirty:0
arch_bits:64
multiplexing_api:epoll
gcc_version:4.4.6
process_id:22291
uptime_in_seconds:10
uptime_in_days:0
lru_clock:1446540
used_cpu_sys:0.00
used_cpu_user:0.00
used_cpu_sys_children:0.00
used_cpu_user_children:0.00
connected_clients:1
connected_slaves:0
client_longest_output_list:0
client_biggest_input_buf:0
blocked_clients:0
used_memory:726128
used_memory_human:709.11K
used_memory_rss:1597440
used_memory_peak:726056
used_memory_peak_human:709.04K
mem_fragmentation_ratio:2.20
mem_allocator:jemalloc-2.2.5
loading:0
aof_enabled:0
changes_since_last_save:0
bgsave_in_progress:0
last_save_time:1461500270
bgrewriteaof_in_progress:0
total_connections_received:1
total_commands_processed:0
expired_keys:0
evicted_keys:0
keyspace_hits:0
keyspace_misses:0
pubsub_channels:0
pubsub_patterns:0
latest_fork_usec:0
vm_enabled:0
role:master
redis 192.168.20.138:6379>
This output confirms that the Redis client can reach the Redis server without problems.
LogStash is installed from the prebuilt binary tarball:
# wget https://download.elasticsearch.org/logstash/logstash/logstash-1.5.3.tar.gz
Extract it and set up a symlink:
# tar -xf logstash-1.5.3.tar.gz -C /usr/local
# ln -s /usr/local/logstash-1.5.3 /usr/local/logstash
With those two steps done, run a quick smoke test on the command line to verify that LogStash works:
# /usr/local/logstash/bin/logstash -e 'input { stdin{} } output { stdout{} }'
hehe # our input, typed while logstash was still starting
Logstash startup completed
2016-04-23T13:05:16.200Z gluster01.lavenliu.com hehe # the program's output
hello lavenliu # our input
2016-04-23T13:05:32.541Z gluster01.lavenliu.com hello lavenliu
so far so good # our input
2016-04-23T13:05:46.757Z gluster01.lavenliu.com so far so good
^C # Ctrl+C terminates the program
Before configuring LogStash, it helps to understand the structure of its configuration files:
# This is a comment. You should use comments to describe
# parts of your configuration.
input {
...
}
filter {
...
}
output {
...
}
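None of the examples below use a filter, but for reference, a minimal hypothetical config with a grok filter might look like this (the pattern and field names are illustrative only):
input { stdin {} }
filter {
  grok {
    # parse lines such as "192.168.20.1 GET /index.html" into client/method/request fields
    match => { "message" => "%{IP:client} %{WORD:method} %{URIPATHPARAM:request}" }
  }
}
output { stdout { codec => rubydebug } }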
Now let's walk through several common LogStash scenarios. While working through them, keep at least three terminal sessions open per SSH connection so the results are easy to observe. Unless noted otherwise, the four examples below all run on linux-node01.
The first example tails /var/log/messages and writes the events to a compressed file under /tmp:
# cat /etc/logstash_to_file.conf
input {
file {
path => "/var/log/messages"
}
}
output {
file {
path => "/tmp/%{+YYYY-MM-dd}-messages.gz"
gzip => true
}
}
How is it used? Point logstash at the configuration file with -f. In terminal 1, run:
# /usr/local/logstash/bin/logstash -f /etc/logstash_to_file.conf
Logstash startup completed
In terminal 2, run the following:
# for i in {1..100} ; do echo "hello ${i}" >> /var/log/messages ; sleep 1 ; done
Now check in terminal 3 whether the compressed file defined in /etc/logstash_to_file.conf has appeared under /tmp:
# ll /tmp
total 16
-rw-r--r-- 1 root root 3974 Apr 24 16:22 2016-04-24-messages.gz
srwxr-xr-x 1 root root 0 Apr 10 16:31 590815f70355102012534af7c62eac5a.socket
srwxr-xr-x 1 root root 0 Apr 10 16:31 c5c1153da9a89d0510f88e2bbc5a2d8c.socket
drwxr-xr-x 2 root root 4096 Apr 24 16:12 hsperfdata_root
drwxr-xr-x 2 root root 4096 Apr 23 14:25 jna-3506402
drwx------ 2 root root 4096 Apr 23 19:25 vmware-root
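The archive can be inspected with zcat to confirm the events were written (the exact line format depends on the file output's codec):
# zcat /tmp/2016-04-24-messages.gz | tail -3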
The second example ships /var/log/messages directly into ElasticSearch:
# cat /etc/logstash_to_es.conf
input {
file {
path => "/var/log/messages"
}
}
output {
elasticsearch {
host => "192.168.20.141"
protocol => "http"
index => "system-messages-%{+YYYY.MM.dd}"
}
}
Run it the same way as before. In terminal 1:
# /usr/local/logstash/bin/logstash -f /etc/logstash_to_es.conf
Logstash startup completed
In terminal 2, write data into the /var/log/messages log file:
# for i in {1..100} ; do echo "hello ${i}" >> /var/log/messages ; sleep 1 ; done
Open the ElasticSearch head plugin in a browser at http://192.168.20.141:9200/_plugin/head/; the new system-messages index is visible there.
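The same check works from the command line via the _cat API; look for a system-messages-YYYY.MM.dd entry in the listing:
# curl 'http://192.168.20.141:9200/_cat/indices?v'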
The third example buffers the log in Redis. Prepare the configuration file /etc/logstash_to_redis.conf with the following content:
# cat /etc/logstash_to_redis.conf
input {
file {
path => "/var/log/messages"
}
}
output {
redis {
data_type => "list"
key => "system-messages"
host => "192.168.20.138"
port => "6379"
db => "1"
}
}
Now in terminal 1, start logstash from the command line (preparing an init script for logstash would work just as well):
# /usr/local/logstash/bin/logstash -f /etc/logstash_to_redis.conf
Logstash startup completed
In terminal 2, simulate log growth:
# for i in {1..1000} ; do echo "hello ${i}" >> /var/log/messages ; sleep 0.5 ; done
In terminal 3, log into the Redis server on linux-node02 and check whether the data is being written:
# redis-cli -h 192.168.20.138 -p 6379
redis 192.168.20.138:6379> select 1
OK
redis 192.168.20.138:6379[1]> KEYS *
1) "system-messages" # 有输出说明数据写入到redis的配置是没有问题的
redis 192.168.20.138:6379[1]>
redis 192.168.20.138:6379[1]> LLEN system-messages
(integer) 2799
redis 192.168.20.138:6379[1]> LINDEX system-messages -1
"{\"message\":\"hello 1000\",\"@version\":\"1\",\"@timestamp\":\"2016-04-24T12:47:39.348Z\",\"host\":\"gluster01.lavenliu.com\",\"path\":\"/var/log/messages\"}"
redis 192.168.20.138:6379[1]> LLEN system-messages
(integer) 3009
This example extends the previous one. It uses two Logstash instances: one (on linux-node01) reads with the file input and writes with the redis output, storing the data in the Redis server on linux-node02; the other (on linux-node02) reads with the redis input and writes to the ElasticSearch cluster with the elasticsearch output. This architecture decouples producers from ES: data is not written to ES directly, so even if ES goes down, the events remain safe in Redis. Because they sit in a Redis list, they stay there until popped, unless the backlog grows large enough to exhaust Redis's memory.
The following steps are performed on linux-node02. Create /etc/logstash_redis_to_es.conf there with this content:
# cat /etc/logstash_redis_to_es.conf
input {
redis {
data_type => "list"
key => "system-messages"
host => "192.168.20.138"
port => "6379"
db => "1"
}
}
output {
elasticsearch {
host => "192.168.20.141"
protocol => "http"
index => "system-redis-messages-%{+YYYY.MM.dd}"
}
}
With the configuration file in place, start Logstash on linux-node02 the same way as in the previous three examples. In terminal 1:
# /usr/local/logstash/bin/logstash -f /etc/logstash_redis_to_es.conf
[DEPRECATED] use `require 'concurrent'` instead of `require 'concurrent_ruby'`
Logstash startup completed
In terminal 2, check whether the data is still sitting in Redis:
# redis-cli -h 192.168.20.138 -p 6379
redis 192.168.20.138:6379> select 1
OK
redis 192.168.20.138:6379[1]> KEYS *
1) "system-messages"
redis 192.168.20.138:6379[1]> LLEN system-messages
(integer) 0 # the list is empty: the data has been consumed and shipped to ES
Now open http://192.168.20.141:9200/_plugin/head/ in a browser (if your installation differs from this document, adjust the URL accordingly). The index we just configured is visible in the ES cluster.
This result confirms that the decoupled architecture works.
Next, let's collect the Nginx access log on linux-node01. Nginx needs its access log configured in JSON format; modify the Nginx configuration as follows:
log_format logstash_json '{"@timestamp":"$time_iso8601",'
'"host": "$server_addr",'
'"client": "$remote_addr",'
'"size": $body_bytes_sent,'
'"responsetime": $request_time,'
'"domain": "$host",'
'"url": "$uri",'
'"referer": "$http_referer",'
'"agent": "$http_user_agent",'
'"status": "$status"}';
Then add the following inside a server block of the http section:
access_log logs/access_json.log logstash_json;
With the configuration done, check the Nginx syntax; if there are no problems, Nginx can be started:
# /usr/local/nginx/sbin/nginx -t
nginx: the configuration file /usr/local/nginx-1.9.4/conf/nginx.conf syntax is ok
nginx: configuration file /usr/local/nginx-1.9.4/conf/nginx.conf test is successful
# no errors, so start it
# /usr/local/nginx/sbin/nginx
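Before wiring in Logstash, it is worth issuing one request and eyeballing the JSON log; a quick sketch (adjust the URL if Nginx listens on a non-default port):
# curl -s http://127.0.0.1/ > /dev/null
# tail -1 /usr/local/nginx/logs/access_json.log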
The approach is the same as in the previous example. Here is the configuration file on linux-node01:
# cat /etc/logstash_nginx_access_log_to_redis.conf
input {
file {
path => "/usr/local/nginx/logs/access_json.log"
codec => "json"
}
}
output {
redis {
data_type => "list"
key => "nginx-access-log"
host => "192.168.20.138"
port => "6379"
db => "2"
}
}
Start Logstash on the linux-node01 node:
# /usr/local/logstash/bin/logstash -f /etc/logstash_nginx_access_log_to_redis.conf
And the configuration file on the linux-node02 node (same filename, but this one reads from Redis and writes to ES):
# cat /etc/logstash_nginx_access_log_to_redis.conf
input {
redis {
data_type => "list"
key => "nginx-access-log"
host => "192.168.20.138"
port => "6379"
db => "2"
}
}
output {
elasticsearch {
host => "192.168.20.141"
protocol => "http"
index => "nginx-access-log-%{+YYYY.MM.dd}"
}
}
Start Logstash on the linux-node02 node:
# /usr/local/logstash/bin/logstash -f /etc/logstash_nginx_access_log_to_redis.conf
Next, drive some traffic at Nginx with the ab tool; if ab is not installed, install it first:
# yum install -y httpd-tools
# ab -n1000 -c10 http://192.168.20.158:81/
This is ApacheBench, Version 2.3 <$Revision: 655654 $>
Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
Licensed to The Apache Software Foundation, http://www.apache.org/
Benchmarking 192.168.20.158 (be patient)
Completed 100 requests
Completed 200 requests
Completed 300 requests
Completed 400 requests
Completed 500 requests
Completed 600 requests
Completed 700 requests
Completed 800 requests
Completed 900 requests
Completed 1000 requests
Finished 1000 requests
Server Software: nginx/1.9.4
Server Hostname: 192.168.20.158
Server Port: 81
Document Path: /
Document Length: 612 bytes
Concurrency Level: 10
Time taken for tests: 0.130 seconds
Complete requests: 1000
Failed requests: 0
Write errors: 0
Total transferred: 844844 bytes
HTML transferred: 612612 bytes
Requests per second: 7664.95 [#/sec] (mean)
Time per request: 1.305 [ms] (mean)
Time per request: 0.130 [ms] (mean, across all concurrent requests)
Transfer rate: 6323.91 [Kbytes/sec] received
Connection Times (ms)
min mean[+/-sd] median max
Connect: 0 0 0.2 0 2
Processing: 0 1 2.7 1 25
Waiting: 0 1 2.7 1 25
Total: 0 1 2.7 1 25
Percentage of the requests served within a certain time (ms)
50% 1
66% 1
75% 1
80% 1
90% 2
95% 4
98% 9
99% 25
100% 25 (longest request)
The results can now be seen in the head interface.
Kibana is a visualization platform for ElasticSearch. Install it on linux-node01:
# wget https://download.elasticsearch.org/kibana/kibana/kibana-4.1.1-linux-x64.tar.gz
After downloading, extract it and set up a symlink:
# tar -xf kibana-4.1.1-linux-x64.tar.gz -C /usr/local/
# ln -s /usr/local/kibana-4.1.1-linux-x64 /usr/local/kibana
Next, modify the Kibana configuration file, config/kibana.yml:
# cd /usr/local/kibana/config
# in kibana.yml, set elasticsearch_url to the following value:
elasticsearch_url: "http://192.168.20.141:9200"
Kibana can now be started:
# nohup /usr/local/kibana/bin/kibana &
Kibana listens on port 5601 by default. Verify that it started successfully:
# ps -ef |grep kibana | grep -v grep
root 16145 13443 0 22:15 pts/1 00:00:01 ./bin/../node/bin/node ./bin/../src/bin/kibana.js
# netstat -antup |grep 5601
tcp 0 0 0.0.0.0:5601 0.0.0.0:* LISTEN 16145/./bin/../node
tcp 0 0 192.168.20.141:5601 192.168.20.1:52243 ESTABLISHED 16145/./bin/../node
tcp 0 0 192.168.20.141:5601 192.168.20.1:52244 ESTABLISHED 16145/./bin/../node
tcp 0 0 192.168.20.141:5601 192.168.20.1:52245 ESTABLISHED 16145/./bin/../node
tcp 0 0 192.168.20.141:5601 192.168.20.1:52242 ESTABLISHED 16145/./bin/../node
If startup looks good, open "http://192.168.20.141:5601" in a browser.
Next, configure Kibana to display the Nginx logs from the previous section: create a time-based index pattern matching the dynamically generated daily indices (for example nginx-access-log-*).
After clicking "Create", the index pattern is ready. If there are several indices, more patterns can be added the same way. The time filter sits in the top-right corner.
Now search the data. By default the 500 most recent documents are shown, sorted newest first.
Queries go in the search box, e.g. "status:404"; they can be combined, e.g. "status:404 OR status:200", or expressed as ranges, e.g. "status:[400 TO 499]".
Finally, build visualizations from the data.
Kibana documentation worth consulting: kibana.logstash.es
On Logstash's concurrent deprecation notice: this appears to be an INFO-level message from Logstash and does not affect the running service. The message reads:
[DEPRECATED] use `require 'concurrent'` instead of `require 'concurrent_ruby'`