Single unavailable Kafka topic blocks delivery to other available topics

filebeat version 5.4.1 (amd64), libbeat 5.4.1 on CentOS 7
filebeat.prospectors:
- input_type: log
  backoff: 200ms
  max_backoff: 500ms
  scan_frequency: 2s
  paths:
    - /logs/login-events.log
  document_type: name
  fields:
    instance: auth-service-1
    log_type: auth_login_events
    topic: topic_name
  include_lines: ['.*']
  json.message_key: uniqueKey
  json.keys_under_root: true
  json.overwrite_keys: true

I have Filebeat delivering messages from two files to two different Kafka topics, one topic per file. If for any reason one of the Kafka topics becomes unavailable, Filebeat stops delivering to the still-available topic as well; instead it just keeps trying to reconnect to the unavailable one.
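
For reference, the Kafka output routes on the fields.topic value set above, roughly like this (a sketch; the broker hosts here are placeholders, not my real ones):

output.kafka:
  hosts: ["kafka1:9092", "kafka2:9092"]
  # each event goes to the topic named in its fields.topic value
  topic: '%{[fields.topic]}'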

Interestingly, if I restart Filebeat, it sends to the available topic all the messages accumulated since the last restart, but then ignores all new messages.

Am I missing some configuration? To me it looks more like a bug.

Filebeat maintains an internal event queue of bounded size. Once the queue is full, no more events can be read from the log files. Events must be ACKed in order to guarantee correct state updates in the registry, so one failing topic can block the whole queue: events bound for the unavailable topic are never ACKed, and the queue runs full.
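
In the 5.x series that bound is the libbeat-level queue_size setting. You can raise it (sketch below, the value is arbitrary), but that only delays the stall rather than removing the coupling:

# Internal libbeat queue for single events; the 5.x default is 1000.
# A larger queue buys time while one topic is down, but it still runs
# full eventually, because events for the dead topic are never ACKed
# and ACKs are strictly in order.
queue_size: 4000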

As the queue is a shared resource, you get some indirect coupling between these topics inside a single Filebeat instance. For fully independent processing you will need two Filebeat processes, one per file/topic, as in the sketch below.
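
A minimal sketch of that split (the paths, hosts, and second topic name are placeholders, and the output/registry settings are assumptions, not copied from your setup). Each process must point at its own registry file so their states do not collide:

# filebeat-auth.yml: first process, first file/topic
filebeat.registry_file: /var/lib/filebeat/registry-auth
filebeat.prospectors:
- input_type: log
  paths:
    - /logs/login-events.log
  fields:
    topic: topic_name
output.kafka:
  hosts: ["kafka:9092"]
  topic: '%{[fields.topic]}'

# filebeat-other.yml: second process, second file/topic
filebeat.registry_file: /var/lib/filebeat/registry-other
filebeat.prospectors:
- input_type: log
  paths:
    - /logs/other-events.log
  fields:
    topic: other_topic
output.kafka:
  hosts: ["kafka:9092"]
  topic: '%{[fields.topic]}'

Run them as separate services, e.g. filebeat -c filebeat-auth.yml and filebeat -c filebeat-other.yml. With this split, a dead topic can only stall the process that writes to it.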
