filebeat version 5.4.1 (amd64), libbeat 5.4.1, on CentOS 7
filebeat.prospectors:
- input_type: log
  backoff: 200ms
  max_backoff: 500ms
  scan_frequency: 2s
  paths:
    - /logs/login-events.log
  document_type: name
  fields:
    instance: auth-service-1
    log_type: auth_login_events
    topic: topic_name
  include_lines: ['.*']
  json.message_key: uniqueKey
  json.keys_under_root: true
  json.overwrite_keys: true
I have Filebeat delivering messages from two files to two different Kafka topics, one topic per file.
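The output side selects the topic per event from the prospector's fields, roughly like this (a sketch of a Filebeat 5.x kafka output; the broker address is a placeholder):

output.kafka:
  # placeholder broker address
  hosts: ["kafka-broker:9092"]
  # pick the topic per event from the "topic" value set under each prospector's fields
  topic: '%{[fields.topic]}'
  required_acks: 1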
If for any reason one of the Kafka topics becomes unavailable, Filebeat stops delivering to the healthy topic as well. Instead, it keeps trying to reconnect to the unavailable topic.
Interestingly, if I restart Filebeat, it sends the healthy topic all the messages accumulated since the last restart, but then ignores all new messages.
Am I missing some configuration? To me it looks more like a bug.