Kafka monitoring in Kubernetes

Autodiscovery works for k8s-related objects, but does it also work for Kafka?

We deployed Kafka on Kubernetes with 3 brokers, and I deployed Metricbeat with the kafka module enabled, like below:

apiVersion: v1
kind: ConfigMap
metadata:
  name: metricbeat-deployment-modules
  namespace: kafka
  labels:
    k8s-app: metricbeat
data:
  kafka.yml: |-
    # Kafka metrics collected using the Kafka protocol
    - module: kafka
      # metricsets: ["partition","consumergroup"]
      period: 10s
      hosts: ["pipeline-kafka:9092"]
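
From what I understand, the partition and consumergroup metricsets talk the Kafka protocol and need to reach the brokers on the same addresses the brokers advertise, not just any address that routes to them. A variant I considered, listing each broker's per-pod DNS name directly (the StatefulSet pod and headless-service names below are assumptions based on a typical setup; ours may differ):

```yaml
- module: kafka
  metricsets: ["partition", "consumergroup"]
  period: 10s
  # Per-broker addresses (hypothetical StatefulSet pod DNS names).
  # These would need to match what each broker puts in advertised.listeners.
  hosts:
    - "pipeline-kafka-0.pipeline-kafka-headless.kafka.svc:9092"
    - "pipeline-kafka-1.pipeline-kafka-headless.kafka.svc:9092"
    - "pipeline-kafka-2.pipeline-kafka-headless.kafka.svc:9092"
```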

It's throwing the errors below:

Error fetching data for metricset kafka.broker: error making http request: Post "http://pipeline-kafka:9092/jolokia/%3FignoreErrors=true&canonicalNaming=false": read tcp 10.5.7.136:34466->172.20.79.178:9092: read: connection reset by peer
Error fetching data for metricset kafka.producer: error making http request: Post "http://pipeline-kafka:9092/jolokia/%3FignoreErrors=true&canonicalNaming=false": read tcp 10.5.7.136:34470->172.20.79.178:9092: read: connection reset by peer
Error fetching data for metricset kafka.partition: error in connect: No advertised broker with address pipeline-kafka:9092 found

Telnet to port 9092 works fine:

[root@ip-10-5-7-136 metricbeat]# telnet pipeline-kafka 9092
Trying 172.20.79.178...
Connected to pipeline-kafka.
Escape character is '^]'.
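
For what it's worth, the failing `Post "http://pipeline-kafka:9092/jolokia/..."` URLs suggest the broker and producer metricsets are expecting a Jolokia HTTP agent rather than the Kafka protocol port, which would explain the connection resets on 9092. A sketch of a separate module block for those metricsets (8778 is Jolokia's default port, and whether our brokers run a Jolokia agent at all is an assumption):

```yaml
# Jolokia-backed metricsets (require a Jolokia JVM agent on each broker)
- module: kafka
  metricsets: ["broker", "producer", "consumer"]
  period: 10s
  # 8778 is Jolokia's default port -- an assumption; the brokers would
  # need the Jolokia agent attached for these metricsets to work.
  hosts: ["pipeline-kafka:8778"]
```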