EFK (Elasticsearch + Fluentd + Kibana) logging system, part 2

1. Fluentd stopped sending data to ES:
https://github.com/uken/fluent-plugin-elasticsearch#declined-logs-are-resubmitted-forever-why

output.conf: |
  # Enriches records with Kubernetes metadata
  <filter kubernetes.**>
    @type kubernetes_metadata
  </filter>
  <match **>
    @id elasticsearch
    @type elasticsearch
    @log_level info
    include_tag_key true
    type_name _doc
    host "#{ENV['OUTPUT_HOST']}"
    port "#{ENV['OUTPUT_PORT']}"
    scheme "#{ENV['OUTPUT_SCHEME']}"
    ssl_version "#{ENV['OUTPUT_SSL_VERSION']}"
    logstash_format true
    logstash_prefix "#{ENV['LOGSTASH_PREFIX']}"
    reload_connections false
    reconnect_on_error true
    reload_on_failure true
    slow_flush_log_threshold 25.0
    <buffer>
      @type file
      path /var/log/fluentd-buffers/kubernetes.system.buffer
      flush_mode interval
      flush_interval 5s
      flush_thread_count 4
      chunk_full_threshold 0.9
      # retry_forever
      retry_type exponential_backoff
      retry_timeout 1m
      retry_max_interval 30
      chunk_limit_size "#{ENV['OUTPUT_BUFFER_CHUNK_LIMIT']}"
      queue_limit_length "#{ENV['OUTPUT_BUFFER_QUEUE_LIMIT']}"
      overflow_action drop_oldest_chunk
    </buffer>
  </match>

The fluentd DaemonSet failed to flush its buffer. fluent-plugin-elasticsearch reloads connections after 10,000 requests (this does not correspond to the event count, because the ES plugin uses the bulk API). This behavior, inherited from the elasticsearch-ruby gem, is enabled by default, and it can interfere with sending events through the ES plugin. Changing the following settings in the match/buffer sections fixed the problem:
reload_connections false  # defaults to true
reconnect_on_error true   # defaults to false
reload_on_failure true    # defaults to false
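The retry settings above (`retry_type exponential_backoff`, `retry_timeout 1m`, `retry_max_interval 30`) imply a bounded retry schedule. A minimal sketch of how the wait intervals grow, assuming fluentd's documented defaults of `retry_wait 1s` and a backoff base of 2 (both assumptions, since the config does not set them):

```python
# Sketch of an exponential-backoff retry schedule like fluentd's.
# Assumes retry_wait = 1s and backoff base = 2 (fluentd's documented defaults).
def retry_schedule(retry_wait=1.0, base=2.0, max_interval=30.0, timeout=60.0):
    """Return the wait (seconds) before each retry until timeout would be exceeded."""
    waits, elapsed, attempt = [], 0.0, 0
    while True:
        wait = min(retry_wait * (base ** attempt), max_interval)  # retry_max_interval cap
        if elapsed + wait > timeout:  # retry_timeout 1m: give up past 60s
            break
        waits.append(wait)
        elapsed += wait
        attempt += 1
    return waits

print(retry_schedule())  # [1.0, 2.0, 4.0, 8.0, 16.0]
```

So with these settings a failing chunk is retried only a handful of times within the one-minute window before the buffer gives up, which is why `retry_timeout` should be generous enough to ride out short ES outages.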

Because each event is usually small, fluentd does not write every event to the output as soon as it is processed; for throughput and stability it uses a buffering model with two main concepts:
  • buffer_chunk: a buffer chunk that stores locally processed events waiting to be sent to the destination; the size of each chunk is configurable.
  • buffer_queue: the queue that holds chunks; its length is configurable.
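The interaction between the chunk queue and `overflow_action drop_oldest_chunk` in the config above can be sketched as follows. This is a hypothetical in-memory model for illustration, not fluentd's actual implementation:

```python
from collections import deque

class BufferQueue:
    """Toy model of fluentd's chunk queue with overflow_action drop_oldest_chunk.

    Illustrates that when the queue is full (queue_limit_length), the oldest
    unflushed chunk is discarded to make room for the newest one.
    """
    def __init__(self, queue_limit_length):
        self.limit = queue_limit_length
        self.chunks = deque()
        self.dropped = 0  # count of chunks lost to overflow

    def enqueue(self, chunk):
        if len(self.chunks) >= self.limit:
            self.chunks.popleft()  # drop_oldest_chunk
            self.dropped += 1
        self.chunks.append(chunk)

q = BufferQueue(queue_limit_length=3)
for c in ["c1", "c2", "c3", "c4", "c5"]:
    q.enqueue(c)
print(list(q.chunks), q.dropped)  # ['c3', 'c4', 'c5'] 2
```

The trade-off is explicit: `drop_oldest_chunk` favors fresh logs over old ones when the output cannot keep up, instead of blocking the pipeline or raising an error.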
This parameter is useful when Elasticsearch cannot return a response to a bulk request within the default 5 seconds:
request_timeout 15s # defaults to 5s

