This article walks through using the ELK stack to collect Tomcat and Nginx logs (shipped with Filebeat and Logstash respectively).
All the packages needed for es can be downloaded from the mirror below (pick the directory for your version; version 7 is used here):
https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/yum/7.3.0/
# Download and extract the package (extract under /data so the tree ends up at /data/elasticsearch-7.3.0, the path used below)
wget https://mirrors.huaweicloud.com/elasticsearch/7.3.0/elasticsearch-7.3.0-linux-x86_64.tar.gz
tar -xzf elasticsearch-7.3.0-linux-x86_64.tar.gz -C /data
On each node, adjust network.host and node.name accordingly:
network.host is set to the node's own IP
node.name is set to node-1, node-2, node-3, and so on for the remaining nodes
cat /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
10.168.104.201 m1
10.168.104.202 node-5
10.168.104.203 node-4
10.168.104.204 node-3
10.168.104.205 node-2
10.168.104.206 node-1
hostnamectl set-hostname node-1
hostnamectl set-hostname node-2
hostnamectl set-hostname node-3
hostnamectl set-hostname node-4
hostnamectl set-hostname node-5
vim /etc/security/limits.conf
* soft nofile 65536
* hard nofile 65536
* soft nproc 65536
* hard nproc 65536
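The new limits only apply to sessions opened after the change. A quick way to confirm them once you log in again as the service user (a small sanity check, not part of the original steps):
# after re-logging in
ulimit -n    # open files, expect 65536
ulimit -u    # max user processes, expect 65536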
Create the elasticsearch user and grant it ownership
groupadd elsearch
useradd elsearch -g elsearch -p elasticsearch
mkdir -p /data/elasticsearch/{data,log}
chown -R elsearch:elsearch /data/elasticsearch*
Configure each node separately (remember to change the IP address and node name in the file):
vi /data/elasticsearch-7.3.0/config/elasticsearch.yml
# Cluster name
cluster.name: elasticsearch
# Node name
node.name: node-1
# Whether this node is eligible to be elected master
node.master: true
# Whether this node stores data
node.data: true
# Maximum number of nodes allowed to store data on this machine
node.max_local_storage_nodes: 5
# Bind address (this node's own IP)
network.host: 10.168.104.206
# HTTP port
http.port: 9200
# Transport port for communication between nodes
transport.tcp.port: 9300
# New in es 7.x: the candidate master nodes; these hosts can be elected master once the service is up
discovery.seed_hosts: ["10.168.104.206:9300", "10.168.104.205:9300", "10.168.104.204:9300", "10.168.104.203:9300", "10.168.104.202:9300"]
# New in es 7.x: required to elect a master when bootstrapping a brand-new cluster
cluster.initial_master_nodes: ["10.168.104.206", "10.168.104.205", "10.168.104.204", "10.168.104.203", "10.168.104.202"]
# Ping timeout (default 3s); raise it as needed to help prevent split-brain
discovery.zen.ping_timeout: 120s
client.transport.ping_timeout: 60s
# Data path
path.data: /data/elasticsearch/data
# Log path
path.logs: /data/elasticsearch/log
bootstrap.system_call_filter: false
http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-methods: OPTIONS, HEAD, GET, POST, PUT, DELETE
http.cors.allow-headers: "X-Requested-With, Content-Type, Content-Length, X-User"
Start elasticsearch
su elsearch
cd /data/elasticsearch-7.3.0/bin/
./elasticsearch -d
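Once every node has been started this way, it is worth confirming that the cluster actually formed. A minimal check against node-1 (10.168.104.206, per the config above) using the standard cat and cluster-health APIs:
# list the nodes that have joined the cluster
curl http://10.168.104.206:9200/_cat/nodes?v
# overall cluster state; status should be green (or yellow while replicas settle)
curl http://10.168.104.206:9200/_cluster/health?pretty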
Troubleshooting errors
OpenJDK 64-Bit Server VM warning: Option UseConcMarkSweepGC was deprecated in version 9.0 and will likely be removed in a future release.
Edit the JVM options file: vi /data/elasticsearch-7.3.0/config/jvm.options
Change -XX:+UseConcMarkSweepGC to -XX:+UseG1GC
[1] bootstrap checks failed
[1]: memory locking requested for elasticsearch process but memory is not locked
# vim /data/elasticsearch-7.3.0/config/elasticsearch.yml
# Setting this to false lets the node start normally.
bootstrap.memory_lock: false
max virtual memory areas vm.max_map_count [65530] is too low, increase to at least [262144]
Edit vi /etc/sysctl.conf and append the following:
vm.max_map_count=655360
After saving, apply it with:
sysctl -p
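To confirm the kernel picked up the new value (optional check):
sysctl vm.max_map_count    # should now print 655360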
Installing the head plugin for Elasticsearch 7.x
Authorize access to es
Add the following to the es configuration:
http.cors.enabled: true
http.cors.allow-origin: "*"
This opens up cross-origin HTTP access so the head plugin can reach Elasticsearch; restart es after making the change.
1. Download the elasticsearch-head source archive
https://codeload.github.com/mobz/elasticsearch-head/tar.gz/v5.0.0
Extract it and cd into the directory
2. Install node.js
elasticsearch-head-master]# curl --silent --location https://rpm.nodesource.com/setup_10.x | bash -
elasticsearch-head-master]# yum install -y nodejs
Check that the installation succeeded (your versions may differ):
elasticsearch-head-master]# node -v
v10.16.0
elasticsearch-head-master]# npm -v
6.9.0
3. Install grunt
elasticsearch-head-master]# npm install -g grunt-cli
elasticsearch-head-master]# npm install
4. Modify the head configuration
elasticsearch-head-master]# vim Gruntfile.js, adding hostname: '10.168.104.206' to the connect server options:
server: {
    options: {
        hostname: '10.168.104.206',
        port: 9100,
        base: '.',
        keepalive: true
    }
}
elasticsearch-head-master]# vim _site/app.js, finding the line containing this.prefs.get("app-base_uri") and pointing base_uri at the es node:
this._super();
this.prefs = services.Preferences.instance();
this.base_uri = this.config.base_uri || this.prefs.get("app-base_uri") || "http://10.168.104.206:9200";
5. Start the head plugin
Change into the elasticsearch-head-master directory and run:
grunt server
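With grunt running, the plugin listens on port 9100 (as set in the Gruntfile above). A quick check from the shell, assuming node-1's address; in practice you would open the same URL in a browser and connect it to http://10.168.104.206:9200:
curl -I http://10.168.104.206:9100    # expect HTTP/1.1 200 OK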
Installing Kibana 7.x
wget https://artifacts.elastic.co/downloads/kibana/kibana-7.6.1-linux-x86_64.tar.gz
tar -zxvf kibana-7.6.1-linux-x86_64.tar.gz
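The following steps assume the extracted directory is moved to /data/kibana, since that is the path the chown and cd commands below use:
# assumed layout: put the extracted tree at /data/kibana
mv kibana-7.6.1-linux-x86_64 /data/kibana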
Edit the Kibana configuration file kibana.yml
vim config/kibana.yml
# Uncomment these settings and change the defaults to the following:
server.port: 5601
server.host: "0.0.0.0"
elasticsearch.hosts: ["http://10.168.104.206:9200", "http://10.168.104.205:9200", "http://10.168.104.204:9200", "http://10.168.104.203:9200", "http://10.168.104.202:9200"]
server.name: "kib-server" # any name you like
i18n.locale: "zh-CN" # switch the UI to Chinese
useradd kibana
chown -R kibana:kibana /data/kibana
su kibana
cd /data/kibana/bin
./kibana &    # start the application in the background
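Kibana can take a minute to come up; once it does, it answers on port 5601 (quick check, replace the placeholder with the host Kibana runs on):
curl -I http://<kibana-host>:5601    # or open http://<kibana-host>:5601 in a browser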
Installing Filebeat
wget https://mirrors.tuna.tsinghua.edu.cn/elasticstack/7.x/yum/7.3.0/filebeat-7.3.0-x86_64.rpm
rpm -ivh filebeat-7.3.0-x86_64.rpm
Start filebeat
systemctl start filebeat
Collecting nginx JSON logs with ELK
1. Have nginx write its access log in JSON format
2. Tell filebeat the input is JSON when it collects
3. The logs reach es as JSON, so Kibana also displays them as JSON, which makes them easier to manage
1. Configure nginx to log in JSON format
# Edit /etc/nginx/nginx.conf and add the following inside the http block:
log_format json '{"time_local": "$time_local",'
                '"remote_addr": "$remote_addr",'
                '"referer": "$http_referer",'
                '"request": "$request",'
                '"status": $status,'
                '"bytes": $body_bytes_sent,'
                '"agent": "$http_user_agent",'
                '"x_forwarded": "$http_x_forwarded_for",'
                '"up_addr": "$upstream_addr",'
                '"up_host": "$upstream_http_host",'
                '"upstream_time": "$upstream_response_time",'
                '"request_time": "$request_time"}';
access_log /var/log/nginx/access.log json;
# Restart the nginx service
systemctl restart nginx.service
# Run the load test again and check that the access log is now written as JSON key/value pairs
ab -n 100 -c 100 http://10.20.1.114/
tail -f /var/log/nginx/access.log
# The tail output should now show each request as a JSON object
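For reference, an access-log entry produced by the format above looks roughly like this (illustrative values only, not taken from a real server):
{"time_local": "05/Sep/2020:12:00:01 +0800", "remote_addr": "10.20.1.1", "referer": "-", "request": "GET / HTTP/1.0", "status": 200, "bytes": 612, "agent": "ApacheBench/2.3", "x_forwarded": "-", "up_addr": "-", "up_host": "-", "upstream_time": "-", "request_time": "0.000"}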
Collecting Tomcat logs with ELK
# Change the tomcat access log to JSON format
vim /etc/tomcat/server.xml
## Delete the original pattern on line 139:
pattern="%h %l %u %t &quot;%r&quot; %s %b" />
## and put the following pattern on line 139 instead:
pattern="&
quot;
clientip&
quot;
:&
quot;
%h&
quot;
,&
quot;
ClientUser&
quot;
:&
quot;
%l&
quot;
,&
quot;
authenticated&
quot;
:&
quot;
%u&
quot;
,&
quot;
AccessTime&
quot;
:&
quot;
%t&
quot;
,&
quot;
method&
quot;
:&
quot;
%r&
quot;
,&
quot;
status&
quot;
:&
quot;
%s&
quot;
,&
quot;
SendBytes&
quot;
:&
quot;
%b&
quot;
,&
quot;
Query?string&
quot;
:&
quot;
%q&
quot;
,&
quot;
partn
er&
quot;
:&
quot;
%Refereri&
quot;
,&
quot;
AgentVersion&
quot;
:&
quot;
%User-Agenti&
quot;
"/>
## Save and exit, restart the service, then check the log:
systemctl restart tomcat
tail -f /var/log/tomcat/localhost_access_log.2020-09-05.txt
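With that pattern in place, each access-log line should be a single JSON object, roughly like this (illustrative values only):
{"clientip":"10.20.1.1","ClientUser":"-","authenticated":"-","AccessTime":"[05/Sep/2020:12:00:01 +0800]","method":"GET / HTTP/1.1","status":"200","SendBytes":"11250","Query?string":"","partner":"-","AgentVersion":"ApacheBench/2.3"}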
Shipping logs from filebeat straight to es, with custom index names and multiple indices (the rpm install puts the config at /etc/filebeat/filebeat.yml):
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/access.log
  # the nginx access log is written as JSON (see above), so decode it at the input
  json.keys_under_root: true
  json.overwrite_keys: true
  fields:
    type: nginx
- type: log
  enabled: true
  paths:
    - /var/log/tomcat/localhost_access_log.*.txt
  # the tomcat access-log pattern above also produces JSON
  json.keys_under_root: true
  json.overwrite_keys: true
  fields:
    type: tomcat
setup.ilm.enabled: false
setup.template.settings:
  index.number_of_shards: 5
  index.number_of_replicas: 1
  index.codec: best_compression
output.elasticsearch:
  hosts: ["10.168.104.206:9200", "10.168.104.205:9200", "10.168.104.204:9200", "10.168.104.203:9200", "10.168.104.202:9200"]
  indices:
    - index: "nginx_%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "nginx"
    - index: "tomcat_%{+yyyy.MM.dd}"
      when.equals:
        fields.type: "tomcat"
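Filebeat can validate the file and the es connection before you restart it; a short check-and-restart sequence (standard filebeat subcommands, the final curl just confirms the new indices show up once some traffic has been logged):
filebeat test config -c /etc/filebeat/filebeat.yml    # syntax check
filebeat test output -c /etc/filebeat/filebeat.yml    # can filebeat reach the es hosts?
systemctl restart filebeat
curl http://10.168.104.206:9200/_cat/indices?v | grep -E "nginx_|tomcat_"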
Installing Logstash to collect logs
[root@redis conf.d]# cat nginx_log.conf
input {
  file {
    path => ["/usr/local/nginx/logs/access.log"]
    start_position => "beginning"
    type => "access"
  }
  file {
    path => ["/usr/local/nginx/logs/error.log"]
    start_position => "beginning"
    type => "error"
  }
}
output {
  if [type] == "access" {
    elasticsearch {
      hosts => ["192.168.10.128:9200"]
      index => "nginx_access-%{+YYYY.MM.dd}"
    }
  }
  if [type] == "error" {
    elasticsearch {
      hosts => ["192.168.10.128:9200"]
      index => "nginx_error-%{+YYYY.MM.dd}"
    }
  }
}
Testing the configuration with the logstash command
Option descriptions:
-f
Specifies the logstash configuration file to load
-e
Takes a configuration string directly (with "", stdin is used as input and stdout as output)
-t
Tests whether the configuration file is valid, then exits
[root@redis conf.d]# /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx_log.conf -t
Sending Logstash logs to /var/log/logstash which is now configured via log4j2.properties
Configuration OK
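Once the configuration passes the -t check, you can run it. A minimal way to start just this pipeline in the background; starting the logstash service instead picks up every file under /etc/logstash/conf.d:
nohup /usr/share/logstash/bin/logstash -f /etc/logstash/conf.d/nginx_log.conf &
# or run all pipelines defined under /etc/logstash/conf.d as a service:
systemctl start logstash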