
How do I debug why Fluentd is not sending data to Elasticsearch?


There are zero error messages when the Fluentd docker container starts, which makes this hard to debug.

Running curl http://elasticsearch:9200/_cat/indices from inside the fluentd container lists the existing indices, but no fluentd index shows up.

docker logs 7b
2018-06-29 13:56:41 +0000 [info]: reading config file path="/fluentd/etc/fluent.conf"
2018-06-29 13:56:41 +0000 [info]: starting fluentd-0.12.19
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-elasticsearch' version '1.4.0'
2018-06-29 13:56:41 +0000 [info]: gem 'fluent-plugin-rename-key' version '0.1.3'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.12.19'
2018-06-29 13:56:41 +0000 [info]: gem 'fluentd' version '0.10.61'
2018-06-29 13:56:41 +0000 [info]: adding filter pattern="**" type="record_transformer"
2018-06-29 13:56:41 +0000 [info]: adding match pattern="docker.*" type="rename_key"
2018-06-29 13:56:41 +0000 [info]: Added rename key rule: rename_rule1 {:key_regexp=>/^log$/, :new_key=>"message"}
2018-06-29 13:56:41 +0000 [info]: adding match pattern="**" type="elasticsearch"
2018-06-29 13:56:41 +0000 [info]: adding source type="forward"
2018-06-29 13:56:41 +0000 [info]: adding source type="monitor_agent"
2018-06-29 13:56:41 +0000 [info]: using configuration file: <ROOT>
  <source>
    @type forward
  </source>
  <source>
    @type monitor_agent
    bind 0.0.0.0
    port 24220
  </source>
  <filter **>
    type record_transformer
    <record>
      node /
      role app
      environment dev
      tenant xxx
      tag ${tag}
    </record>
  </filter>
  <match docker.*>
    type rename_key
    rename_rule1 ^log$ message
    append_tag message
  </match>
  <match **>
    type elasticsearch
    host elasticsearch
    port 9200
    index_name fluentd
    type_name fluentd
    include_tag_key true
    logstash_format true
  </match>
</ROOT>
2018-06-29 13:56:41 +0000 [info]: listening fluent socket on 0.0.0.0:24224
...
2018-06-29 14:16:38 +0000 [info]: listening fluent socket on 0.0.0.0:24224
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=49
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=50
2018-06-29 14:20:56 +0000 [warn]: incoming chunk is broken: source="host: 172.18.42.1, addr: 172.18.42.1, port: 48704" msg=51
... many repeats
2018-07-01 06:21:52 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 08:39:07 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
  2018-07-01 06:21:52 +0000 [warn]: suppressed same stacktrace
2018-07-01 08:39:07 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 13:02:17 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
  2018-07-01 08:39:07 +0000 [warn]: suppressed same stacktrace
2018-07-01 13:02:17 +0000 [warn]: temporarily failed to flush the buffer. next_retry=2018-07-01 21:04:48 +0000 error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
  2018-07-01 13:02:17 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [warn]: failed to flush the buffer. error_class="MultiJson::ParseError" error="Yajl::ParseError" plugin_id="object:2ac58fef2200"
2018-07-01 21:04:48 +0000 [warn]: retry count exceededs limit.
  2018-07-01 21:04:48 +0000 [warn]: suppressed same stacktrace
2018-07-01 21:04:48 +0000 [error]: throwing away old logs.

I was able to insert data into a test index in Elasticsearch via curl, so Elasticsearch itself accepts writes. How do I fix whatever is making fluentd fail?
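
Something like the following works (the index name and document body are just examples):

    curl -X POST http://elasticsearch:9200/test/doc \
      -H 'Content-Type: application/json' \
      -d '{"message":"hello"}'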

2 Answers

  • 1

    I can't comment yet, so I'm adding a few observations here.

    The documentation says to use @type elasticsearch. Also, if elasticsearch and fluentd both run as docker containers, make sure they run on a network that lets them reach each other (perhaps try the IP address first).
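
    For instance, with @type the match block from the question would become (all other parameters unchanged):

    <match **>
      @type elasticsearch
      host elasticsearch
      port 9200
      index_name fluentd
      type_name fluentd
      include_tag_key true
      logstash_format true
    </match>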

    Also, what does your Dockerfile look like? Then we could pass a verbosity flag through to the fluentd command.
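
    On the networking point: assuming plain docker (not compose), a user-defined network lets the two containers resolve each other by name. The image tag and the fluentd image name below are placeholders:

    docker network create logging
    docker run -d --name elasticsearch --network logging docker.elastic.co/elasticsearch/elasticsearch:6.3.0
    docker run -d --name fluentd --network logging -p 24224:24224 my-fluentd-image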

  • 2

    I am using this configuration successfully for fluentd to elasticsearch:

    <source>
      @type      forward
      @label     @mainstream
      bind       0.0.0.0
      port       24224
    </source>
    
    <label @mainstream>
      <match **>
        @type copy
    
        <store>
          @type               elasticsearch
          host                elasticsearch
          port                9200
          logstash_format     true
          logstash_prefix     fluentd
          logstash_dateformat %Y%m%d
          include_tag_key     true
          type_name           access_log
          tag_key             @log_name
          <buffer>
            flush_mode            interval
            flush_interval        1s
            retry_type            exponential_backoff
            flush_thread_count    2
            retry_forever         true
            retry_max_interval    30
            chunk_limit_size      2M
            queue_limit_length    8
            overflow_action       block
          </buffer>
        </store>
    
      </match>
    </label>
    

    For debugging, you can use tcpdump:

    sudo tcpdump -i eth0 tcp port 24224 -X -s 0 -nn
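
    Since the config in the question already enables monitor_agent on port 24220, you can also poll its HTTP API; the per-plugin counters such as retry_count and buffer_queue_length show whether the elasticsearch output is stuck retrying:

    curl -s http://localhost:24220/api/plugins.json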
    
