Filebeat is unable to send logs to Kafka

Earlier we were using a Filebeat-to-Logstash setup for logs. To make sure no log is lost during downtime, we decided to put Kafka between Filebeat and Logstash, so that Kafka would buffer logs in case of an outage.

The earlier Filebeat-to-Logstash setup was working fine. To get the new setup (Filebeat -> Kafka -> Logstash) working, I made the config changes below.

filebeat.yml

    output.kafka:
      # initial brokers for reading cluster metadata
      hosts: [ "192.168.0.207:9092" ]
      # message topic selection + partitioning
      topic: "filebeat"
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000

Logstash pipeline configuration.

    input {
      kafka {
        bootstrap_servers => "192.168.0.207:9092"
        topics => ["filebeat"]
        codec => json
      }
    }

When I started the setup, with Filebeat running in Kubernetes, I could see the line below in the Filebeat logs.

    Connection to kafka(192.168.0.207:9092) established

Although the topic was created in Kafka, I do not see any messages arriving in Kafka, and there is no data in Elasticsearch.

Please help me with this.

Can you try setting the option `auto_offset_reset => "earliest"` in your kafka input config and see if it picks up the messages?

Also, can you confirm that there are messages in your Kafka topic using the Kafka command line tools?
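For reference, a minimal way to check this with the stock Kafka CLI tools (script names assume a standard Kafka distribution; the broker address and topic are the ones from this thread):

```shell
# List the topics known to the broker, then read up to 5 existing
# messages from the "filebeat" topic, starting from the earliest offset.
kafka-topics.sh --bootstrap-server 192.168.0.207:9092 --list
kafka-console-consumer.sh --bootstrap-server 192.168.0.207:9092 \
  --topic filebeat --from-beginning --max-messages 5
```

If the consumer prints nothing, the messages never reached the topic, which narrows the problem to the Filebeat-to-Kafka leg.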

Hey @leandrojmp,

I tried adding `auto_offset_reset => "earliest"`, but I still cannot see logs going from Filebeat to Kafka.

I tested the Kafka-to-Logstash leg by sending a message from the producer command line, and I could see the message in Kibana (Kafka -> Logstash -> Elasticsearch <- Kibana).
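For completeness, the console-producer test described above can be run like this (assuming a stock Kafka install and the broker/topic from this thread):

```shell
# Publish one JSON test message to the "filebeat" topic; if the
# kafka -> logstash -> elasticsearch leg works, it should show up in Kibana.
echo '{"message": "test from console producer"}' | \
  kafka-console-producer.sh --bootstrap-server 192.168.0.207:9092 --topic filebeat
```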

So I think the problem is only in the Filebeat-to-Kafka leg: Filebeat is unable to send logs to Kafka.

Please find a few observations from the Filebeat log below.

Kafka connection established:

2020-10-12T04:24:43.376Z	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to kafka(192.168.0.207:9092)
2020-10-12T04:24:43.376Z	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to kafka(192.168.0.207:9092) established

I also found an ingest-related warning in the Filebeat logs.

2020-10-12T04:24:43.230Z	INFO	[monitoring]	log/log.go:118	Starting metrics logging every 30s
2020-10-12T04:24:43.231Z	INFO	instance/beat.go:450	filebeat start running.
2020-10-12T04:24:43.231Z	INFO	memlog/store.go:119	Loading data file of '/usr/share/filebeat/data/registry/filebeat' succeeded. Active transaction id=0
2020-10-12T04:24:43.231Z	INFO	memlog/store.go:124	Finished loading transaction log file for '/usr/share/filebeat/data/registry/filebeat'. Active transaction id=2
2020-10-12T04:24:43.231Z	WARN	beater/filebeat.go:381	Filebeat is unable to load the Ingest Node pipelines for the configured modules because the Elasticsearch output is not configured/enabled. If you have already loaded the Ingest Node pipelines or are using Logstash pipelines, you can ignore this warning.

Please help me, as my setup is blocked on this.

I have one more observation.

I was doing my setup in a k8s cluster. I decided to test the Filebeat setup on my local machine instead, and when I set up Filebeat locally I could see data flowing to Kibana via Kafka.

So the same setup does not work when installed in the k8s cluster, but it works with Filebeat on my local machine.

After that I exec'd into the Filebeat daemon pod and tried to telnet to the Kafka host 192.168.0.207:9092, but could not connect to the port. So the Kafka port is unreachable from the Filebeat pod.
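A quick way to reproduce this reachability check without telnet (which is often missing from container images) is a plain bash TCP probe. The function below is a sketch, using the broker address from this thread:

```shell
# check_port HOST PORT: prints "reachable" if a TCP connection to
# HOST:PORT completes within 3 seconds, "unreachable" otherwise.
# Uses bash's built-in /dev/tcp, so no extra tools are needed in the pod.
check_port() {
  if timeout 3 bash -c "cat < /dev/null > /dev/tcp/$1/$2" 2>/dev/null; then
    echo "reachable"
  else
    echo "unreachable"
  fi
}

# Run from inside the Filebeat pod against the broker from this thread:
check_port 192.168.0.207 9092
```

If this prints "unreachable" from inside the pod but "reachable" from the local machine, the problem is cluster network policy or routing, not Filebeat itself.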

One thing confuses me here: if I am unable to connect to the Kafka port via telnet, why am I getting "connection established" in the Filebeat logs?

2020-10-12T04:24:43.376Z	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to kafka(192.168.0.207:9092)
2020-10-12T04:24:43.376Z	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to kafka(192.168.0.207:9092) established

Please help me with this.

I think there is a bug in Filebeat. I can put any random Kafka host in the Filebeat output, and it still shows "connection established" in the Filebeat logs.
filebeat.yml

    output.kafka:
      # initial brokers for reading cluster metadata
      hosts: [ "192.168.0.209:9092" ]
      # message topic selection + partitioning
      topic: "filebeat"
      partition.round_robin:
        reachable_only: false
      required_acks: 1
      compression: gzip
      max_message_bytes: 1000000
      auto_offset_reset: earliest

Filebeat logs

2020-10-12T05:32:09.709Z	INFO	[publisher_pipeline_output]	pipeline/output.go:143	Connecting to kafka(192.168.0.209:9092)
2020-10-12T05:32:09.709Z	INFO	[publisher_pipeline_output]	pipeline/output.go:151	Connection to kafka(192.168.0.209:9092) established

There is no Kafka host at 192.168.0.209:9092.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.