Filebeat sends events to Kafka repeatedly

Filebeat version: 5.0.2
OS: CentOS 7

Filebeat Config:

filebeat.prospectors:
- input_type: log
  paths:
    - /terminallogs/jhw/step/step.log*
  document_type: 'jhw-terminal-steplog'
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
  ignore_older: 6h      # must be greater than close_inactive
  close_inactive: 5m
  clean_inactive: 7h    # must be greater than ignore_older + scan_frequency
  harvester_limit: 30   # in combination with the close_* settings
- input_type: log
  paths:
    - /terminallogs/jhw/exception/exception.log*
  document_type: 'jhw-terminal-exceptionlog'
  multiline.pattern: '^\['
  multiline.negate: true
  multiline.match: after
  ignore_older: 6h      # must be greater than close_inactive
  close_inactive: 5m
  clean_inactive: 7h    # must be greater than ignore_older + scan_frequency
  harvester_limit: 30   # in combination with the close_* settings

output.kafka:
  hosts: ["10.18.207.121:9092", "10.18.207.122:9092"]
  topic: 'filebeat-%{[type]}'
  version: 0.9.0.1
  #required_acks: 1
  # The number of concurrent load-balanced Kafka output workers.
  #worker: 3

logging.level: debug

Logs: /var/log/filebeat/filebeat

2016-11-30T20:49:23+08:00 WARN producer/leader/filebeat-jhw-terminal-exceptionlog/0 state change to [retrying-17]
2016-11-30T20:49:23+08:00 WARN producer/leader/filebeat-jhw-terminal-exceptionlog/0 abandoning broker 1
2016-11-30T20:49:23+08:00 WARN producer/broker/1 shut down
2016-11-30T20:49:23+08:00 WARN client/metadata fetching metadata for [filebeat-jhw-terminal-exceptionlog] from broker 10.18.207.121:9092
2016-11-30T20:49:23+08:00 WARN producer/broker/1 starting up
2016-11-30T20:49:23+08:00 WARN producer/broker/1 state change to [open] on filebeat-jhw-terminal-exceptionlog/0
2016-11-30T20:49:23+08:00 WARN producer/leader/filebeat-jhw-terminal-exceptionlog/0 selected broker 1
2016-11-30T20:49:23+08:00 WARN producer/leader/filebeat-jhw-terminal-exceptionlog/0 state change to [flushing-17]
2016-11-30T20:49:23+08:00 WARN producer/leader/filebeat-jhw-terminal-exceptionlog/0 state change to [normal]
2016-11-30T20:49:23+08:00 WARN Connected to broker at 10.18.207.122:9092 (registered as #1)
2016-11-30T20:49:24+08:00 WARN producer/broker/1 state change to [closing] because read tcp 10.18.210.84:34445->10.18.207.122:9092: read: connection reset by peer
2016-11-30T20:49:24+08:00 WARN Closed connection to broker 10.18.207.122:9092
2016-11-30T20:49:24+08:00 WARN producer/leader/filebeat-jhw-terminal-exceptionlog/0 state change to [retrying-18]
2016-11-30T20:49:24+08:00 WARN producer/leader/filebeat-jhw-terminal-exceptionlog/0 abandoning broker 1
2016-11-30T20:49:24+08:00 WARN producer/broker/1 shut down
2016-11-30T20:49:24+08:00 WARN client/metadata fetching metadata for [filebeat-jhw-terminal-exceptionlog] from broker 10.18.207.121:9092
2016-11-30T20:49:24+08:00 WARN producer/broker/1 starting up
2016-11-30T20:49:24+08:00 WARN producer/broker/1 state change to [open] on filebeat-jhw-terminal-exceptionlog/0
2016-11-30T20:49:24+08:00 WARN producer/leader/filebeat-jhw-terminal-exceptionlog/0 selected broker 1
2016-11-30T20:49:24+08:00 WARN producer/leader/filebeat-jhw-terminal-exceptionlog/0 state change to [flushing-18]

It seems that, because the TCP connection to a Kafka broker was reset by the peer, Filebeat keeps retrying, and indeed Kafka received many duplicate events.

It turns out that required_acks defaults to 1, which means wait for the local commit; since Filebeat never receives the acknowledgment from the broker, it keeps retrying. Filebeat ignores the max_retries setting and retries until all events are published.
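For reference, a sketch of the relevant Kafka output knobs (option names per the Filebeat 5.x Kafka output documentation; the values shown here are just the defaults, not a recommendation):

```yaml
output.kafka:
  hosts: ["10.18.207.121:9092", "10.18.207.122:9092"]
  topic: 'filebeat-%{[type]}'
  version: 0.9.0.1
  # ACK reliability level: 0 = no response, 1 = wait for local
  # commit (default), -1 = wait for all replicas to commit.
  required_acks: 1
  # Note: because Filebeat guarantees delivery, max_retries is
  # effectively ignored and events are retried until published.
  max_retries: 3
```

Setting required_acks: 0 would stop the retry loop, but at the cost of losing events whenever the broker connection drops.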

Well, the connection has been reset by the peer, i.e. closed by Kafka or on behalf of Kafka (e.g. by a firewall). Filebeat uses at-least-once semantics and therefore has to retry.
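As an illustration (not Filebeat's actual code), at-least-once delivery means the sender only advances past an event once the broker acknowledges it. If the connection is reset after the broker has stored the event but before the ack arrives, the sender must resend, producing a duplicate:

```python
def publish_at_least_once(events, send):
    """Retry each event until acknowledged. Duplicates are possible
    when the ack is lost after the broker already stored the event."""
    for event in events:
        while True:
            try:
                send(event)   # raises on connection reset / lost ack
                break         # acknowledged: safe to advance
            except ConnectionError:
                pass          # resend the same event -> possible duplicate

# Simulated broker that stores the event but drops the first ack:
received = []
attempts = {"n": 0}

def flaky_send(event):
    received.append(event)     # broker commits the event...
    attempts["n"] += 1
    if attempts["n"] == 1:
        raise ConnectionError  # ...but the ack never reaches the sender

publish_at_least_once(["e1"], flaky_send)
print(received)  # ['e1', 'e1'] -- the event was duplicated, not lost
```

Deduplication, if needed, has to happen downstream (e.g. by unique event IDs), since the sender cannot distinguish "event lost" from "ack lost".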

Anything in kafka logs?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.