We are not receiving logs from Filebeat in Kafka

Hi

We are using Kafka 2.12 and have configured the output in Filebeat as shown below.

Service versions:

  1. logstash-6.3.1
  2. kafka_2.12-2.2.0
  3. zookeeper-3.4.14
  4. elasticsearch-6.3.1
  5. kibana-6.3.1
  6. filebeat-6.3.1

filebeat.prospectors:
- type: log
  enabled: true
  paths:
    - /var/log/*.log

#============================= Filebeat modules ===============================

filebeat.config.modules:
  # Glob pattern for configuration loading
  path: ${path.config}/modules.d/*.yml

  # Set to true to enable config reloading
  reload.enabled: false

  # Period on which files under path should be checked for changes
  #reload.period: 10s

#==================== Elasticsearch template setting ==========================

setup.template.settings:
  index.number_of_shards: 3
  #index.codec: best_compression
  #_source.enabled: false

#-------------------------- Kafka Output ----------------------------------
output.kafka:
  # initial brokers for reading cluster metadata
  hosts: ["kafka ip:9092"]

  # message topic selection + partitioning
  topic: 'test'
  partition.round_robin:
    reachable_only: false

  required_acks: 1
  compression: gzip
  max_message_bytes: 1000000

We have checked the topic in Kafka and there are no logs in it. Can you please help fix this issue?

@saravananveera, kindly use </> when providing the config file.

I want to ask you some questions before giving an answer:

  1. What is the console debug log of Filebeat? (An example command is shown after this list.)
  2. Is the host you mentioned reachable from the Filebeat node? Have you added its hostname to /etc/hosts?
  3. What is the output of the command below?
    # bin/kafka-console-consumer.sh --zookeeper localhost:2181 --bootstrap-server <your kafka host ip/hostname>:9092 --topic test --from-beginning
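
For question 1, one way to capture the console debug log (an illustrative invocation; adjust the config path to your installation) is to run Filebeat in the foreground with debug logging enabled:

    # filebeat -e -d "*" -c /etc/filebeat/filebeat.yml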

Hi Debashis

Thanks for your response; please find the information below.

==================================================================

1. Filebeat Logs for your reference:

2019-04-29T07:56:11.063Z INFO kafka/log.go:36 producer/leader/test/0 abandoning broker 0

2019-04-29T07:56:11.063Z INFO kafka/log.go:36 producer/broker/0 shut down

2019-04-29T07:56:11.163Z INFO kafka/log.go:36 client/metadata fetching metadata for [test] from broker 10.11.12.159:9092

2019-04-29T07:56:11.165Z INFO kafka/log.go:36 producer/broker/0 starting up

2019-04-29T07:56:11.165Z INFO kafka/log.go:36 producer/broker/0 state change to [open] on test/0

2019-04-29T07:56:11.165Z INFO kafka/log.go:36 producer/leader/test/0 selected broker 0

2019-04-29T07:56:11.171Z INFO kafka/log.go:36 producer/broker/0 maximum request accumulated, waiting for space

2019-04-29T07:56:11.375Z INFO kafka/log.go:36 Failed to connect to broker kafka12.159:9092: dial tcp: lookup kafka12.159 on 8.8.8.8:53: no such host

2019-04-29T07:56:11.375Z INFO kafka/log.go:36 producer/broker/0 state change to [closing] because dial tcp: lookup kafka12.159 on 8.8.8.8:53: no such host

2019-04-29T07:56:11.377Z INFO kafka/log.go:36 producer/broker/0 state change to [closing] because dial tcp: lookup kafka12.159 on 8.8.8.8:53: no such host

===================================================================
2. We are able to reach the Kafka server from Filebeat. Also, we didn't configure any hostname in the /etc/hosts file.

===================================================================
3. I hope this helps.

[root@kafka12 kafka]# bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test

welcome to ELK

[root@kafka12 kafka]# bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
welcome to ELK

===================================================================
I ran into an error message that --zookeeper is not a recognized option, so I removed "--zookeeper" from your command:

[root@kafka12 kafka]# bin/kafka-console-consumer.sh --zookeeper localhost:2181 --bootstrap-server localhost:9092 --topic test --from-beginning
zookeeper is not a recognized option

[root@kafka12 kafka]# bin/kafka-console-consumer.sh localhost:2181 --bootstrap-server localhost:9092 --topic test --from-beginning
welcome to ELK

@saravananveera,

The Filebeat log you shared confirms that Filebeat is unable to connect to Kafka; that is why no logs are being forwarded to Kafka:

2019-04-29T07:56:11.375Z INFO kafka/log.go:36 Failed to connect to broker kafka12.159:9092: dial tcp: lookup kafka12.159 on 8.8.8.8:53: no such host
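
The error suggests the broker is advertising the hostname kafka12.159 in its cluster metadata, and the Filebeat host cannot resolve that name through its DNS (8.8.8.8). A quick check from the Filebeat server (an illustrative command, assuming the broker hostname and IP seen in the log above, kafka12.159 and 10.11.12.159) would be:

    # getent hosts kafka12.159    # should print the broker IP; no output means the name does not resolve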

Hi @Debashis

Thank you for your support. After adding an entry for the Kafka IP/hostname in /etc/hosts on the Filebeat server (an example entry is shown below), it's working fine.
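
For anyone who hits the same issue, the /etc/hosts entry on the Filebeat server would look roughly like this (illustrative, using the broker IP and hostname from the log above):

    10.11.12.159   kafka12.159

An alternative to editing /etc/hosts would be to have the broker advertise a resolvable address itself, e.g. in Kafka's config/server.properties (a sketch, again assuming that broker IP):

    advertised.listeners=PLAINTEXT://10.11.12.159:9092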

It's good :+1: @saravananveera.

Kindly mark this discussion as "Solved"
