Preface
Hi everyone, remember 小堂's previous article, "Enterprise-grade ELK - Architecture and Deployment (Part 1), tested and working!"? If not, you can follow the link to review it; it covers the ELK architecture, its advantages, and the deployment steps for Kibana + Elasticsearch in detail.
Without further ado, today we go straight to the good stuff: deploying Logstash, Kafka, and Filebeat for the ELK stack.
Hands-on
Environment and host roles.
Region: Guangdong (VPC networking is more secure; SSD disks give higher performance)
Cloud host spec: 4 cores / 16 GB (4 cores / 8 GB also works, but feels laggy)
Network: VPC virtual private cloud (more secure and efficient)
Bandwidth: 5 Mbps
OS: CentOS 7.6
Number of cloud hosts: 5
Software versions: ELK 7.4.0, Kafka 2.6.0 (the kafka_2.12-2.6.0 build, i.e. Scala 2.12)
(For orientation, the roles implied by the steps below: 192.168.0.2 runs Filebeat; 192.168.0.3, 192.168.0.4, and 192.168.0.5 run Kafka and ZooKeeper, with 0.4 and 0.5 doubling as ES-node1 and ES-node2; 192.168.0.6 is ES-master and also hosts Kibana and Logstash.)
First, here are the steps to deploy Logstash.
a. Step 1
Log in to the Logstash node: ssh to 192.168.0.6.
b. Step 2
cd /opt/
wget https://artifacts.elastic.co/downloads/logstash/logstash-7.4.0.tar.gz
c. Step 3
tar -zxvf logstash-7.4.0.tar.gz
mkdir -p /opt/els/logs/logs
d. Step 4
vi /opt/logstash-7.4.0/config/logstash.yml
path.logs: /opt/els/logs/logs
path.config: /opt/logstash-7.4.0/conf.d/*.conf
xpack.monitoring.enabled: true
xpack.monitoring.elasticsearch.username: "elastic"
xpack.monitoring.elasticsearch.password: "123456"
xpack.monitoring.elasticsearch.hosts: ["http://ES-node1:9200","http://ES-node2:9201","http://ES-node3:9202"]
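Since the monitoring settings above point at the Elasticsearch cluster from Part 1, it is worth confirming the address and credentials actually work before going further; a quick check (assuming the ES-node1 hostname resolves on this node, as set up in Part 1):
curl -u elastic:123456 http://ES-node1:9200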
e. Step 5
Create a pipeline configuration file; it defines the custom log handling. Note that the file has to match the path.config glob set above (conf.d/*.conf), so create that directory and use a .conf name:
mkdir -p /opt/logstash-7.4.0/conf.d
vi /opt/logstash-7.4.0/conf.d/demo.conf
input {
  beats {
    port => 5044
    type => "syslog"
  }
}
filter {
  grok {
    match => [ "message", "%{TIMESTAMP_ISO8601:timestamp}" ]
  }
  date {
    match => [ "timestamp", "yyyy-MM-dd HH:mm:ss" ]
    locale => "en"
    timezone => "+00:00"
    target => "@timestamp"
  }
  mutate {
    remove_field => ["host", "agent", "ecs", "tags", "fields", "@version", "input", "log"]
  }
}
output {
  elasticsearch {
    hosts => ["http://ES-node1:9200"]
    user => "elastic"
    password => "123456"
    index => "test-log"
  }
}
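One thing to watch: later in this guide Filebeat ships logs to Kafka rather than straight to Logstash, so for that data path the pipeline also needs a kafka input alongside (or instead of) the beats input above. A minimal sketch, assuming the test topic and the broker addresses used in the Kafka section below:
input {
  kafka {
    # the three brokers deployed in the next section (assumed addresses)
    bootstrap_servers => "192.168.0.3:9092,192.168.0.4:9092,192.168.0.5:9092"
    topics => ["test"]
    # Filebeat publishes its events to Kafka as JSON
    codec => "json"
  }
}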
f. Step 6
chown -R els:els /opt/logstash-7.4.0
g. Step 7
cd /opt/logstash-7.4.0/bin/
nohup ./logstash &
Logstash started successfully.
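A couple of quick sanity checks help at this point (the pipeline test is best run before launching; the port is the beats port from demo.conf):
# validate the pipeline file without starting Logstash
/opt/logstash-7.4.0/bin/logstash -f /opt/logstash-7.4.0/conf.d/demo.conf --config.test_and_exit
# confirm the beats input is listening on 5044
ss -lntp | grep 5044
# follow the startup log
tail -f nohup.out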
With Logstash deployed, the next step is Kafka, which acts as the relay and buffer for the logs. The Kafka deployment steps follow. (Personally tested.)
a. Step 1
Log in to the Kafka primary node and both secondary nodes: ssh to 192.168.0.3, 192.168.0.4, and 192.168.0.5.
b. Step 2
vi /etc/hosts
192.168.0.3 kafka
192.168.0.4 ES-node1
192.168.0.5 ES-node2
192.168.0.6 ES-master
c. Step 3
cd /opt/
wget https://mirror.bit.edu.cn/apache/kafka/2.6.0/kafka_2.12-2.6.0.tgz
tar -zxvf kafka_2.12-2.6.0.tgz
d. Step 4
Configuration on 192.168.0.3:
vi /opt/kafka_2.12-2.6.0/config/zookeeper.properties
dataDir=/opt/kafka_2.12-2.6.0/zookeeper-data/data/
dataLogDir=/opt/kafka_2.12-2.6.0/zookeeper-data/logs
clientPort=2181
maxClientCnxns=0
initLimit=10
syncLimit=5
server.1=192.168.0.3:2888:3888
server.2=192.168.0.4:2888:3888
server.3=192.168.0.5:2888:3888
Create the data directories:
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/data/
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/logs
Configuration on 192.168.0.4 (same file):
dataDir=/opt/kafka_2.12-2.6.0/zookeeper-data/data/
dataLogDir=/opt/kafka_2.12-2.6.0/zookeeper-data/logs
clientPort=2181
admin.enableServer=false
maxClientCnxns=0
initLimit=10
syncLimit=5
server.1=192.168.0.3:2888:3888
server.2=192.168.0.4:2888:3888
server.3=192.168.0.5:2888:3888
Create the data directories:
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/data/
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/logs
Configuration on 192.168.0.5 (same file):
dataDir=/opt/kafka_2.12-2.6.0/zookeeper-data/data/
dataLogDir=/opt/kafka_2.12-2.6.0/zookeeper-data/logs
clientPort=2181
admin.enableServer=false
maxClientCnxns=0
initLimit=10
syncLimit=5
server.1=192.168.0.3:2888:3888
server.2=192.168.0.4:2888:3888
server.3=192.168.0.5:2888:3888
Create the data directories:
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/data/
mkdir -p /opt/kafka_2.12-2.6.0/zookeeper-data/logs
Note that zookeeper.properties is identical on all three nodes (only the server.N IPs need changing for your own environment); what distinguishes the nodes is the myid file created in the next step.
e. Step 5
On 192.168.0.3:
cd /opt/kafka_2.12-2.6.0/zookeeper-data/data/
echo "1" > myid
On 192.168.0.4:
cd /opt/kafka_2.12-2.6.0/zookeeper-data/data/
echo "2" > myid
On 192.168.0.5:
cd /opt/kafka_2.12-2.6.0/zookeeper-data/data/
echo "3" > myid
f. Step 6
On 192.168.0.3:
cd /opt/kafka_2.12-2.6.0/
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
On 192.168.0.4:
cd /opt/kafka_2.12-2.6.0/
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
On 192.168.0.5:
cd /opt/kafka_2.12-2.6.0/
nohup bin/zookeeper-server-start.sh config/zookeeper.properties &
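Once all three are up, you can confirm the ensemble is healthy with the zookeeper-shell tool bundled with Kafka (any of the three addresses works):
bin/zookeeper-shell.sh 192.168.0.3:2181 ls /
# a healthy ensemble replies with the root znodes, e.g. [zookeeper]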
g. Step 7
On 192.168.0.3:
vi /opt/kafka_2.12-2.6.0/config/server.properties
broker.id=1
listeners=PLAINTEXT://192.168.0.3:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka_2.12-2.6.0/kafka-data/logs
num.partitions=9
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.3:2181,192.168.0.4:2181,192.168.0.5:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
On 192.168.0.4 (same file):
broker.id=2
listeners=PLAINTEXT://192.168.0.4:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka_2.12-2.6.0/kafka-data/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.3:2181,192.168.0.4:2181,192.168.0.5:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
On 192.168.0.5 (same file):
broker.id=3
listeners=PLAINTEXT://192.168.0.5:9092
num.network.threads=3
num.io.threads=8
socket.send.buffer.bytes=102400
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
log.dirs=/opt/kafka_2.12-2.6.0/kafka-data/logs
num.partitions=1
num.recovery.threads.per.data.dir=1
offsets.topic.replication.factor=1
transaction.state.log.replication.factor=1
transaction.state.log.min.isr=1
log.retention.hours=168
log.segment.bytes=1073741824
log.retention.check.interval.ms=300000
zookeeper.connect=192.168.0.3:2181,192.168.0.4:2181,192.168.0.5:2181
zookeeper.connection.timeout.ms=6000
group.initial.rebalance.delay.ms=0
The per-node changes are broker.id and the IP in listeners; zookeeper.connect lists all three ZooKeeper nodes and is the same everywhere.
h. Step 8
On 192.168.0.3:
cd /opt/kafka_2.12-2.6.0/
nohup bin/kafka-server-start.sh config/server.properties &
On 192.168.0.4:
cd /opt/kafka_2.12-2.6.0/
nohup bin/kafka-server-start.sh config/server.properties &
On 192.168.0.5:
cd /opt/kafka_2.12-2.6.0/
nohup bin/kafka-server-start.sh config/server.properties &
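With all three brokers launched, a quick way to confirm they registered with ZooKeeper (the reply should list every broker.id):
bin/zookeeper-shell.sh 192.168.0.3:2181 ls /brokers/ids
# expected: [1, 2, 3]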
i. Step 9
On 192.168.0.3, create a test topic:
bin/kafka-topics.sh --create --zookeeper 192.168.0.3:2181,192.168.0.4:2181,192.168.0.5:2181 --replication-factor 2 --partitions 3 --topic test
If the topic is created successfully, the cluster has passed its startup test.
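To push a message through the new topic end to end, the console tools bundled with Kafka work well:
# terminal 1: consume everything on the topic
bin/kafka-console-consumer.sh --bootstrap-server 192.168.0.3:9092 --topic test --from-beginning
# terminal 2: type a test message, then check it shows up in terminal 1
bin/kafka-console-producer.sh --broker-list 192.168.0.3:9092 --topic test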
Finally, deploy Filebeat on the target hosts to monitor and collect their logs. The steps follow. (Personally tested.)
a. Step 1
Log in to the Filebeat host: ssh to 192.168.0.2.
b. Step 2
cd /opt/
wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.4.0-linux-x86_64.tar.gz
c. Step 3
tar -zxvf ./filebeat-7.4.0-linux-x86_64.tar.gz
d. Step 4
Configuration on 192.168.0.2:
vi /opt/filebeat-7.4.0-linux-x86_64/filebeat.yml
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/httpd/access_log
  tags: ["C7-httpd-access_log"]
- type: log
  enabled: true
  paths:
    - /var/log/httpd/error_log
  tags: ["C7-httpd-error_log"]

filebeat.config.modules:
  path: ${path.config}/modules.d/*.yml
  reload.enabled: false

output.kafka:
  enabled: true
  hosts: ["192.168.0.3:9092"]
  topic: "test"
  required_acks: 1

processors:
  - add_host_metadata: ~
  - add_cloud_metadata: ~
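Before launching for real, Filebeat can validate both this file and the Kafka connection itself:
cd /opt/filebeat-7.4.0-linux-x86_64/
./filebeat test config -c filebeat.yml
./filebeat test output -c filebeat.yml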
e. Step 5
nohup /opt/filebeat-7.4.0-linux-x86_64/filebeat -e -c /opt/filebeat-7.4.0-linux-x86_64/filebeat.yml &
Filebeat started successfully.
So far, the enterprise ELK stack has been fully deployed. Let's verify that data is reaching Kibana.
Open a browser and log in at http://192.168.0.6:5601/
The username and password are both elastic.
Click Discover to open the Discover page, and you should see the freshly ingested data.
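If Discover stays empty, querying Elasticsearch directly helps narrow things down; check whether the index from the Logstash output exists (credentials as configured in that output):
curl -u elastic:123456 "http://ES-node1:9200/_cat/indices?v" | grep test-log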
(A friendly reminder: since every business differs, the deployment in this guide was done with the firewall disabled on all hosts. Configure firewall rules to match your own requirements.)
The enterprise ELK deployment is now complete; simply scaled out to more machines, this setup can handle the vast majority of workloads. In the next article, 小堂 will share some of the pitfalls hit while deploying ELK, a bit more of the underlying theory, and notes on using ELK day to day.