Filebeat log gets flooded if no Kafka topic is available

Hi,

I have an issue where Filebeat tries to send data to a Kafka topic that doesn't exist anymore. The Filebeat logs are flooded with:

2022-09-19T13:47:29.111Z        INFO    [publisher]     pipeline/retry.go:223     done
2022-09-19T13:47:29.111Z        INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2022-09-19T13:47:29.111Z        INFO    [publisher]     pipeline/retry.go:223     done
2022-09-19T13:47:29.112Z        INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2022-09-19T13:47:29.112Z        INFO    [publisher]     pipeline/retry.go:223     done
2022-09-19T13:47:29.112Z        INFO    [publisher]     pipeline/retry.go:219   retryer: send unwait signal to consumer
2022-09-19T13:47:29.112Z        INFO    [publisher]     pipeline/retry.go:223     done

Filebeat writes 16 of these lines per millisecond, rotating all useful information out of the logs in the blink of an eye. My Filebeat configuration is:

output.kafka:
  hosts: ["...", "...", "..."]

  topic: "default_topic"
  topics:
    - topic: '%{[topic_name]}'
      when.has_fields: ['topic_name']

  partition.round_robin:
    reachable_only: true

  ssl.certificate_authorities: ["..."]
  username: "..."
  password: "..."

  required_acks: -1
  compression: snappy
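
For context, events pick up the "topic_name" field upstream before they reach the output. A minimal illustration of how that could happen (the processor and the field value below are hypothetical, not my actual setup):

  processors:
    - add_fields:
        target: ""
        fields:
          topic_name: "some_app_topic"

The problem starts when the topic named in that field gets deleted on the Kafka side while events carrying it are still flowing.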

Is there a way to prevent this behavior? Unfortunately, Filebeat doesn't fall back to the default "topic" either; it just keeps retrying the missing topic in an infinite loop.
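
The only stopgap I can think of is raising the overall log level so these INFO lines are dropped, e.g.:

  logging.level: warning

but that would also suppress every other useful INFO message, so it only hides the symptom rather than fixing the retry loop.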

Thanks!

Filebeat version: 7.9.2
Kafka version: 3.2.1

Should I raise this question on GitHub?
