WARN kafka message: client/metadata found some partitions to be leaderless

Hi All,
I am unable to post a message from Filebeat, acting as an external producer (outside the k8s cluster), to Kafka running inside the Kubernetes cluster. The topic "test_topic" has already been created.
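For reference, the kafka output section of my filebeat.yml looks roughly like this (a sketch from memory; the SSL file paths are placeholders, the broker address and topic are the real ones):

output.kafka:
  hosts: ["10.19.166.10:30094"]
  topic: "test_topic"
  ssl.certificate_authorities: ["/etc/filebeat/ca.pem"]
  ssl.certificate: "/etc/filebeat/client.pem"
  ssl.key: "/etc/filebeat/client.key"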
Error from filebeat:

2017/12/27 09:26:03.225021 log.go:16: WARN kafka message: Initializing new client
2017/12/27 09:26:03.225076 log.go:12: WARN client/metadata fetching metadata for all topics from broker 10.19.166.10:30094
{
  "@timestamp": "2017-12-27T09:26:00.224Z",
  "beat": {
    "hostname": "",
    "name": "",
    "version": "5.5.1"
  },
  "input_type": "log",
  "message": "test message",
  "offset": 15,
  "source": "/root/fb/input_logs/art.log",
  "type": "log"
}
2017/12/27 09:26:03.264764 log.go:12: WARN Connected to broker at 10.19.166.10:30094 (unregistered)
2017/12/27 09:26:03.265964 log.go:12: WARN client/brokers registered new broker #0 at 192.168.1.13:9094
2017/12/27 09:26:03.266037 log.go:16: WARN kafka message: Successfully initialized new client
2017/12/27 09:26:03.266198 log.go:12: WARN client/metadata fetching metadata for [test_topic] from broker 10.19.166.10:30094
2017/12/27 09:26:03.266995 log.go:16: WARN kafka message: client/metadata found some partitions to be leaderless
2017/12/27 09:26:03.267009 log.go:12: WARN client/metadata retrying after 250ms... (3 attempts remaining)

Debugging steps followed:
Step1:

kubectl exec kafka-0 -n elk -c kafka curl 10.19.166.10:30094
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 14 0 14 0 0 3879 0 --:--:-- --:--:-- --:--:-- 4666

Step2:

kubectl exec kafka-0 -n elk -c kafka curl 192.168.1.13:9094
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
100 14 0 14 0 0 9575 0 --:--:-- --:--:-- --:--:-- 14000

Step3:
Kafka server.properties:

listeners=SASL_PLAINTEXT://192.168.1.13:9092,PLAINTEXT://192.168.1.13:9093,SSL://192.168.1.13:9094
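To confirm what the broker actually advertises, its registration in ZooKeeper can be checked as well (a sketch; the script name/path and the ZooKeeper address depend on the image):

kubectl exec kafka-0 -n elk -c kafka -- zookeeper-shell.sh localhost:2181 get /brokers/ids/0

The endpoints field in the returned JSON should list SSL://192.168.1.13:9094, matching the "registered new broker #0 at 192.168.1.13:9094" line in the Filebeat log above.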

Step4:

kubectl get ep -n elk
kafka-headless-external 192.168.1.13:9094
kubectl get svc -n elk -o wide
kafka-headless-external 172.31.38.80 9094:30094/TCP

Sounds more like a problem with your kafka setup, not beats.

In Kafka, topics are split into partitions. Normally each broker handles/stores a subset of the partitions for a given topic. That is, each partition requires one active 'leader' and multiple 'replica' brokers. The leader is responsible for handling events from publishers and for updating the replicas. Every now and then a new leader is elected. If no leader is available, no events can be processed for that partition.
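You can see which partitions are currently leaderless by describing the topic on one of the brokers, for example (just a sketch; the script path and ZooKeeper address depend on your setup):

kubectl exec kafka-0 -n elk -c kafka -- kafka-topics.sh --describe --topic test_topic --zookeeper localhost:2181

Any partition printed with Leader: -1 has no active leader and will block the producer.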

By default the kafka output blocks if some partition cannot be handled. This can be disabled by setting partition.round_robin.reachable_only: false in the kafka output (see the sample in the docs: https://www.elastic.co/guide/en/beats/filebeat/current/kafka-output.html#kafka-output). It's not really recommended to disable this setting, as in the worst case all events might be published to one partition only -> no more load balancing across publishers, disks and consumers (as only one consumer can read from a partition).
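For reference, the setting sits under the kafka output in filebeat.yml, roughly like this (sketch, reusing the host/topic from your post):

output.kafka:
  hosts: ["10.19.166.10:30094"]
  topic: "test_topic"
  partition.round_robin:
    reachable_only: false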

Thanks for the explanation... Actually, that problem is solved now by changing the topic partition count from 3 to 1. Not sure why it didn't work with the default partition count of 3.
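For the record, since Kafka does not allow reducing the partition count of an existing topic, I ended up deleting and recreating it, roughly like this (a sketch; script path and ZooKeeper address from memory, and delete.topic.enable must be true for the delete to take effect):

kubectl exec kafka-0 -n elk -c kafka -- kafka-topics.sh --delete --topic test_topic --zookeeper localhost:2181
kubectl exec kafka-0 -n elk -c kafka -- kafka-topics.sh --create --topic test_topic --partitions 1 --replication-factor 1 --zookeeper localhost:2181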

Uhm... you still have replication enabled?

Are your kafka brokers correctly interconnected to form a cluster?
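A quick way to check is listing the broker ids registered in ZooKeeper, e.g. (sketch; adjust the script path and ZooKeeper address to your setup):

kubectl exec kafka-0 -n elk -c kafka -- zookeeper-shell.sh localhost:2181 ls /brokers/ids

If the topic's partitions were originally assigned to brokers that no longer show up here, those partitions stay leaderless, which would match the original error.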

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.