This article walks through collecting logs with Filebeat and shipping them to Kafka.
Filebeat installation
root@ubuntu:/data# dpkg -i filebeat-6.8.1-amd64.deb
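A quick sanity check after the install (the deb also registers a systemd unit named filebeat):
root@ubuntu:/data# filebeat version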
Collecting a single system log with Filebeat
1) Test writing to a local file
root@ubuntu:/data# grep -Ev '^$|^#' /etc/filebeat/filebeat.yml
--------------------------------------------------------------
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
# write to a local file for testing
output.file:
  path: "/tmp"
  filename: "filebeat.log"
--------------------------------------------------------------
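After saving the config, restart Filebeat and watch the output file; a minimal check, assuming the systemd unit that the deb package installs:
root@ubuntu:/data# systemctl restart filebeat
root@ubuntu:/data# tail -f /tmp/filebeat.log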
2) Write to Kafka
root@ubuntu:~# grep -Ev '^#|^$' /etc/filebeat/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
  # note: document_type was removed in Filebeat 6.x; kept as in the original,
  # but a custom field under "fields:" is the supported replacement
  document_type: "nginxlog-kafka"
  exclude_lines: ['^DBG']
  exclude_files: ['.gz$']
filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false
setup.template.settings:
  index.number_of_shards: 3
setup.kibana:
output.kafka:
  hosts: ["192.168.47.113:9092","192.168.47.112:9092","192.168.47.111:9092"]
  topic: "nginxlog-kafka"
  partition.round_robin:
    reachable_only: true
  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000
processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
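Restart Filebeat so the Kafka output takes effect, and follow its log for any delivery errors:
root@ubuntu:~# systemctl restart filebeat
root@ubuntu:~# journalctl -u filebeat -f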
Verify that the topic was created:
/usr/local/kafka/bin/kafka-topics.sh \
  --list \
  --zookeeper 192.168.47.111:2181,192.168.47.112:2181,192.168.47.113:2181
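To confirm events are actually flowing into the topic, attach a console consumer (standard Kafka CLI; any broker in the cluster works as the bootstrap address):
/usr/local/kafka/bin/kafka-console-consumer.sh \
  --bootstrap-server 192.168.47.111:9092 \
  --topic nginxlog-kafka \
  --from-beginning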
Reading the Kafka logs into Elasticsearch with Logstash
input {
  kafka {
    bootstrap_servers => "192.168.47.113:9092"
    topics => ["nginxlog-kafka"]
    codec => json
  }
}
output {
  elasticsearch {
    hosts => ["192.168.47.106:9200"]
    index => "kafka-nginx-log-%{+YYYY.MM.dd}"
  }
}