
Reading nginx logs with Filebeat and writing them to Kafka

Date: 2018-11-29 12:36:00


Filebeat configuration for writing to Kafka:

filebeat.inputs:
- type: log
  paths:
    - /tmp/access.log
  tags: ["nginx-test"]
  fields:
    type: "nginx-test"
    log_topic: "nginxmessages"
  fields_under_root: true          # place the custom fields at the top level of the event
processors:
- drop_fields:                     # drop metadata fields we don't need downstream
    fields: ["beat","input","source","offset"]
name: 10.10.5.119
output.kafka:
  enabled: true
  hosts: ["10.78.1.85:9092","10.78.1.87:9092","10.78.1.71:9092"]
  topic: "%{[log_topic]}"          # topic name taken from the log_topic field set above
  partition.round_robin:
    reachable_only: true           # only publish to partitions whose leaders are reachable
  worker: 2
  required_acks: 1                 # wait for the leader's ack only
  compression: gzip
  max_message_bytes: 10000000
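Because `fields_under_root: true` lifts `log_topic` to the top level of each event, the `topic: "%{[log_topic]}"` format string can resolve it directly. The sketch below simulates that field lookup in Python to make the routing explicit; `resolve_topic` is a hypothetical helper for illustration, not Filebeat's actual implementation.

```python
import re

def resolve_topic(fmt: str, event: dict) -> str:
    # Replace each %{[field]} reference with the matching value
    # from the event dict (illustrative sketch only).
    return re.sub(r"%\{\[([^\]]+)\]\}", lambda m: str(event[m.group(1)]), fmt)

# An event shaped like the config above would produce
# (log_topic sits at the top level thanks to fields_under_root):
event = {
    "type": "nginx-test",
    "log_topic": "nginxmessages",
    "message": '10.0.0.1 - - [29/Nov/2018:12:36:00 +0800] "GET / HTTP/1.1" 200 612',
}

print(resolve_topic("%{[log_topic]}", event))  # nginxmessages
```

With this config, every event from `/tmp/access.log` is therefore published to the Kafka topic `nginxmessages`, which is exactly the topic Logstash subscribes to below.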

Logstash configuration for reading from Kafka:

input {
    kafka {
        bootstrap_servers => "10.78.1.85:9092,10.78.1.87:9092,10.78.1.71:9092"
        topics => ["nginxmessages"]
        codec => "json"    # decode the JSON events produced by Filebeat
    }
}
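The input block above only consumes events; in practice it is paired with an output section. A minimal sketch, assuming an Elasticsearch destination (the host and index pattern here are assumptions, not from the original post):

```
output {
    elasticsearch {
        hosts => ["127.0.0.1:9200"]               # assumed ES host
        index => "nginx-access-%{+YYYY.MM.dd}"    # assumed daily index pattern
    }
}
```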


Original article: http://blog.51cto.com/liuzhengwei521/2323576
