
Installing Kibana and Logstash on Ubuntu



Original article: http://www.cnblogs.com/saintaxl/p/3946667.html

  In short, the workflow is: the logstash agent monitors and filters logs and sends the filtered entries to redis (redis here only acts as a queue, not as storage); the logstash indexer collects the logs from the queue and hands them to the full-text search service ElasticSearch, where you can run custom searches; Kibana then combines those searches into web dashboards.

  • ruby: required to run Kibana
  • rubygems: required to install ruby extensions
  • bundler: a dependency manager, similar in purpose to yum
  • JDK: required to run Java programs
  • redis: handles the log queue
  • logstash: collects and filters logs
  • ElasticSearch: full-text search service (logstash bundles one)
  • kibana: web front end for displaying the results

Start on the logstash indexer server. Logstash is split into an indexer and an agent: the agent monitors and filters logs, while the indexer collects them and hands them to ElasticSearch for searching. In addition, logstash can be deployed in either standalone or centralized mode.

In standalone mode everything runs on a single server, which both ships and receives its own logs; in centralized mode one server receives the logs from all shippers (which, as I understand it, are simply logstash agents).

In fact logstash itself makes no distinction between shipper and collector; the only difference is the configuration file. In this walkthrough we test the centralized setup.
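To see that the shipper/indexer split is purely a matter of configuration, you can run a throwaway standalone pipeline with logstash's -e flag (a quick sketch, assuming logstash 1.4.2 is unpacked under /usr/local/logstash-1.4.2 as in the install steps below). Type a line at the prompt and it comes back as an event:

# read events from stdin and print them to stdout; no separate shipper or indexer involved
/usr/local/logstash-1.4.2/bin/logstash -e 'input { stdin { } } output { stdout { codec => rubydebug } }'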

There are two servers in this setup:

192.168.124.128  logstash indexer, ElasticSearch, Kibana, JDK
192.168.124.132  logstash agent, redis, JDK
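Optionally, you can give the two machines readable names in /etc/hosts on each host; the hostnames below are made up purely for illustration:

echo "192.168.124.128 logstash-indexer" >> /etc/hosts
echo "192.168.124.132 logstash-agent" >> /etc/hosts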

Preparation

Install openssl

Remove the old version

apt-get remove openssl
apt-get autoremove openssl

Download the latest version

wget http://www.openssl.org/source/openssl-1.0.1i.tar.gz

tar -zxvf openssl-1.0.1i.tar.gz
cd /opt/openssl-1.0.1i
./config --prefix=/usr/local/ssl
make && make install

Create symlinks

ln -s /usr/local/ssl/bin/openssl /usr/bin/openssl
ln -s /usr/local/ssl/include/openssl /usr/include/openssl

Refresh the dynamic linker configuration

vim /etc/ld.so.conf

Append this line at the end of the file, then refresh the cache with ldconfig:

/usr/local/ssl/lib
ldconfig -v

Test

openssl version -a
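As an optional check, confirm that the dynamic linker now sees the rebuilt libraries:

ldconfig -p | grep -E 'libssl|libcrypto'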

 

Install the PCRE library

wget ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/pcre-8.33.tar.gz

tar -zxvf pcre-8.33.tar.gz
cd pcre-8.33
./configure --prefix=/usr/local/pcre-8.33
make && make install

 

Install zlib

wget http://zlib.net/zlib-1.2.8.tar.gz

tar -zxvf zlib-1.2.8.tar.gz
cd zlib-1.2.8
./configure --prefix=/usr/local/zlib-1.2.8
make && make install

 

Install nginx

wget http://nginx.org/download/nginx-1.6.1.tar.gz

tar -zxvf nginx-1.6.1.tar.gz
cd nginx-1.6.1
./configure --prefix=/usr/local/nginx --with-http_ssl_module --with-openssl=/opt/openssl-1.0.1i --with-pcre=/opt/pcre-8.33 --with-zlib=/opt/zlib-1.2.8
make && make install

nginx commands

Start: /usr/local/nginx/sbin/nginx
Reload: /usr/local/nginx/sbin/nginx -s reload
Stop: /usr/local/nginx/sbin/nginx -s stop
View the main process: netstat -ntlp
Check that it is listening: netstat -ano|grep 80
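Before starting or reloading, nginx can also validate its configuration file:

/usr/local/nginx/sbin/nginx -t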

 

Install ruby (required to run Kibana)

sudo apt-get update
wget http://cache.ruby-lang.org/pub/ruby/2.1/ruby-2.1.2.tar.gz
tar -zxvf ruby-2.1.2.tar.gz
cd ruby-2.1.2
./configure --prefix=/usr/local/ruby
make && make install

 

Environment setup

vi /etc/environment

Add the Ruby bin directory to the PATH variable in /etc/environment and save the file, as shown below:

PATH="/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/ruby/bin"

After editing the environment file, run source so the change takes effect immediately:

$ source /etc/environment

To check that the installation succeeded, run:

$ ruby -v

Once the installation is confirmed, add the following command symlinks. I am not sure exactly why these links are needed; by Ruby's convention-over-configuration principle, it is presumably just a convention. (keyboardota)

$ sudo ln -s /usr/local/ruby/bin/ruby /usr/local/bin/ruby
$ sudo ln -s /usr/local/ruby/bin/gem /usr/bin/gem

Alternatively, install Ruby from the Ubuntu repositories:

apt-get install ruby-full

 

Install rubygems (required for ruby extensions)

wget http://production.cf.rubygems.org/rubygems/rubygems-2.4.1.tgz

tar -zxvf rubygems-2.4.1.tgz
cd rubygems-2.4.1
ruby setup.rb
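Check that gem is available, and install bundler (listed in the dependencies above; the original steps do not show an explicit bundler install, so treat this as a suggested extra):

gem -v
gem install bundler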

 

Install redis (used for the log queue)

wget http://download.redis.io/releases/redis-2.8.13.tar.gz

tar -zxvf redis-2.8.13.tar.gz
cd redis-2.8.13
make
vim redis.conf
Set "daemonize yes"
Start: /usr/local/redis-2.8.13/src/redis-server /usr/local/redis-2.8.13/redis.conf
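A quick way to confirm redis is up, using the redis-cli binary built alongside redis-server:

/usr/local/redis-2.8.13/src/redis-cli ping    # should reply PONG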

 

Install elasticsearch (full-text search service; logstash bundles one)

wget https://download.elasticsearch.org/elasticsearch/elasticsearch/elasticsearch-1.3.2.tar.gz

tar -zxvf elasticsearch-1.3.2.tar.gz
cd elasticsearch-1.3.2
Start:
/usr/local/elasticsearch-1.3.2/bin/elasticsearch -d
Then visit http://localhost:9200
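From the shell you can also verify the node responds and check cluster health:

curl http://localhost:9200
curl 'http://localhost:9200/_cluster/health?pretty'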

 

Install logstash (collects and filters logs)

wget https://download.elasticsearch.org/logstash/logstash/logstash-1.4.2.tar.gz

tar -zxvf logstash-1.4.2.tar.gz

Start (after creating the config files below; run the agent command on the agent host and the indexer command on the indexer host):

nohup /usr/local/logstash-1.4.2/bin/logstash -f /usr/local/logstash-1.4.2/agent.conf &

nohup /usr/local/logstash-1.4.2/bin/logstash -f /usr/local/logstash-1.4.2/indexer.conf &

vim /usr/local/logstash-1.4.2/agent.conf

input {
  file {
    path => [ "/var/log/*.log", "/var/log/messages", "/var/log/syslog", "/var/log/denyhosts", "/var/log/dmesg", "/var/log/faillog", "/var/log/aptitude" ]
    start_position => beginning
  }
  file {
    type => "nginx-access"
    path => "/var/log/nginx/access.log"
  }
}

output {
  redis{
    host =>"192.168.124.128"
    data_type => "list"
    key => "logstash"
  }
}
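Once the agent is running, you can confirm events are reaching the redis list named "logstash" (run this from any host that has redis-cli, before the indexer drains the queue):

/usr/local/redis-2.8.13/src/redis-cli -h 192.168.124.128 llen logstash
/usr/local/redis-2.8.13/src/redis-cli -h 192.168.124.128 lrange logstash 0 0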

 vim /usr/local/logstash-1.4.2/indexer.conf

input {
  redis {
    host => "192.168.124.128"
    data_type => "list"
    key => "logstash"
  }
}

output {
  elasticsearch {
    host => "192.168.124.132" # address of the elasticsearch service
  }
}
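After the indexer has been running for a while, daily logstash-YYYY.MM.DD indices should show up in elasticsearch; the _cat API makes this easy to check:

curl 'http://192.168.124.132:9200/_cat/indices?v'
curl 'http://192.168.124.132:9200/logstash-*/_search?size=1&pretty'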

 

Install Kibana

wget https://download.elasticsearch.org/kibana/kibana/kibana-3.1.0.tar.gz

tar -zxvf kibana-3.1.0.tar.gz
vim /usr/local/kibana-3.1.0/config.js

Search for the elasticsearch parameter in config.js and change it to match your environment:

elasticsearch: "http://192.168.124.132:9200",

You can also change the default_route parameter so that the logstash dashboard opens by default instead of the Kibana welcome page:

default_route     : '/dashboard/file/logstash.json',

Download the configuration template

wget https://raw.github.com/elasticsearch/kibana/master/sample/nginx.conf

Modify the Nginx configuration

vim /usr/local/nginx/conf/nginx.conf

Add a server block:

    #
    # Nginx proxy for Elasticsearch + Kibana
    #
    # In this setup, we are password protecting the saving of dashboards. You may
    # wish to extend the password protection to all paths.
    #
    # Even though these paths are being called as the result of an ajax request, the
    # browser will prompt for a username/password on the first request
    #
    # If you use this, you'll want to point config.js at http://FQDN:80/ instead of
    # http://FQDN:9200
    #
    server {
      listen                *:80 ;

      server_name           localhost;
      access_log            /usr/local/nginx/logs/kibana.access.log;

      location / {
        root  /usr/local/kibana-3.1.0;
        index  index.html  index.htm;
      }

      location ~ ^/_aliases$ {
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
      }
      location ~ ^/.*/_aliases$ {
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
      }
      location ~ ^/_nodes$ {
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
      }
      location ~ ^/.*/_search$ {
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
      }
      location ~ ^/.*/_mapping {
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
      }

      # Password protected end points
      location ~ ^/kibana-int/dashboard/.*$ {
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
        limit_except GET {
          proxy_pass http://127.0.0.1:9200;
          auth_basic "Restricted";
          auth_basic_user_file /usr/local/nginx/kibana.myhost.org.htpasswd;
        }
      }
      location ~ ^/kibana-int/temp.*$ {
        proxy_pass http://127.0.0.1:9200;
        proxy_read_timeout 90;
        limit_except GET {
          proxy_pass http://127.0.0.1:9200;
          auth_basic "Restricted";
          auth_basic_user_file /usr/local/nginx/kibana.myhost.org.htpasswd;
        }
      }
    }
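The two password-protected locations above reference an htpasswd file; one way to create it is with the htpasswd tool from apache2-utils (the username "admin" below is just an example):

apt-get install apache2-utils
htpasswd -c /usr/local/nginx/kibana.myhost.org.htpasswd admin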

 

If a firewall is in place, open the following ports (example ufw commands follow the list):

  • port 80 (for the web interface)
  • port 5544 (to receive remote syslog messages)
  • port 6379 (for the redis broker)
  • port 9200 (so the web interface can access elasticsearch)
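With Ubuntu's ufw, for example (a sketch; adapt it to whichever firewall you actually use):

ufw allow 80/tcp      # Kibana web interface behind nginx
ufw allow 5544/tcp    # remote syslog input, if configured
ufw allow 6379/tcp    # redis broker
ufw allow 9200/tcp    # elasticsearch HTTP API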
