Hello,
On our kerberized Cloudera Hadoop cluster, I tried to set up the Metricbeat Kafka module to collect the consumergroup and partition metricsets.
I set up the following kafka.yml:
- module: kafka
  period: 10s
  metricsets:
    - consumergroup
    - partition
  hosts: ["localhost:9092"]
  enabled: true
  topics: []
With hosts: ["localhost:9092"], I get "connection refused".
If I specify the IP of the broker instead, I get an ERROR: "error.message: broker connection failed: EOF".
As an illustration, below is the config we used to (successfully) consume data from Kafka in Logstash:
input {
  kafka {
    jaas_path => "/etc/logstash/conf.d/kafka_spark_jaas.conf"
    sasl_kerberos_service_name => "kafka"
    security_protocol => "SASL_PLAINTEXT"
    kerberos_config => "/etc/krb5.conf"
    auto_offset_reset => "earliest"
    topics => ["topicName"]
    codec => "json"
    bootstrap_servers => "xxx.xxx.xxx.xxx:9092,yyy.yyy.yyy.yyy:9092,zzz.zzz.zzz.zzz:9092"
  }
}
So my question is: can we set the equivalent properties (JAAS file, sasl_kerberos_service_name, security_protocol) in the Metricbeat Kafka module to collect Kafka metrics?
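For what it's worth, recent Beats releases (7.9+, as far as I can tell) document kerberos.* settings for Kafka. If that applies to your Metricbeat version, a sketch might look like the following; the principal, keytab path, and realm below are assumptions for illustration only, and the hosts line reuses the broker IP from the Logstash config:

```yaml
- module: kafka
  period: 10s
  metricsets:
    - consumergroup
    - partition
  # use the broker's advertised host:port rather than localhost
  hosts: ["xxx.xxx.xxx.xxx:9092"]
  enabled: true
  topics: []
  # Kerberos settings as documented for recent Beats releases;
  # the values below are placeholders, adjust for your cluster
  kerberos.auth_type: keytab
  kerberos.username: metricbeat                     # principal (assumption)
  kerberos.keytab: /etc/metricbeat/metricbeat.keytab  # assumption
  kerberos.config_path: /etc/krb5.conf
  kerberos.service_name: kafka
  kerberos.realm: EXAMPLE.COM                       # assumption
```

On older Metricbeat versions without these settings, the "broker connection failed: EOF" symptom would be consistent with the broker closing a plaintext connection on a SASL_PLAINTEXT listener.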