How to debug further? Logstash not reading Kafka input

Hey all, looking for some help on where to look next:

I'm playing with a pipeline where Filebeat pushes some log data onto a Kafka topic, and Logstash reads from that topic, does some filtering, and ultimately pushes to Elasticsearch, but I've hit a problem.

Upon initial deployment things seemed to be working: after starting Filebeat on my appliance, a slew of old log entries was pushed, and Logstash read them from the Kafka topic and wrote them (no filtering at this stage) to a file for debugging. Then I went to start iterating on the Logstash filtering, SIGTERM'd Logstash, and since then I've been unable to get it to read anything off the topic after restarting.

So basically it all worked, I killed Logstash, and now I can't get it to read anything off the topic, even when it's sat running and I generate additional log entries.

I can verify that everything is going into Kafka, because if I manually spin up a console consumer I can see the newer log entries...
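
For reference, the sanity check was roughly this (from memory, using the stock 0.10 Kafka scripts; "test" is the topic name from my config further down):

bin/kafka-console-consumer.sh --new-consumer --bootstrap-server localhost:9092 --topic test --from-beginning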

Versions:
Filebeat: 5.2
Logstash: 5.2
Kafka: 0.10.0.1

I'm not seeing Kafka or Logstash log anything untoward, even after increasing the logging... I'm looking for any guidance anyone might have, or suggestions on settings to tweak. Should I set up some consumer group configuration on Kafka? (I haven't yet; it's just a simple topic.) I've tried changing the auto_offset_reset setting in Logstash's kafka input and not had any luck.

Would welcome any suggestions for things to check... I feel like I'm missing something obvious.

Thanks,
Greg

I would look at installing Kafka Manager to see the consumer groups you have defined, but you did check that new documents are coming in, so that is good.

My guess is that your consumer is either not working or not configured correctly.

First, you could define a stdout output just to make sure you can see something; see the sketch after this list. Make sure you use a different consumer group so you don't lose your existing place (unless you don't care).

Second, check your logs for errors.

Finally, provide your Logstash config so we can see what you have configured.
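
Something like this is what I have in mind for the stdout test — an untested sketch, so swap in your own topic name; the group name here is just a throwaway:

input {
  kafka {
    bootstrap_servers => "localhost:9092"
    topics => ["your_topic"]          # substitute your actual topic
    group_id => "debug-consumer"      # throwaway group, keeps your real group's offsets intact
    auto_offset_reset => "earliest"   # a new group starts from the beginning of the topic
  }
}

output {
  stdout { codec => rubydebug }
}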

Thanks for the reply Ed.

I've toyed with the consumer settings in Logstash; my conf currently looks like this:
input {
  kafka {
    bootstrap_servers => ["localhost:9092"]
    topics => "test"
    auto_offset_reset => "latest"
  }
}

I've tried stdout for the output but didn't get anything; right now my Logstash output looks like this:
output {
  #stdout { codec => json }
  file {
    path => "/var/log/gregtest.log"
    codec => "json"
  }
}

The file hasn't been touched since I SIGTERM'd Logstash when it was working.

As for looking at the logs for errors... that's where I need help. My Logstash log is filled with:
[2017-03-18T09:24:38,898][INFO ][logstash.pipeline ] Starting pipeline {"id"=>"main", "pipeline.workers"=>12, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>5, "pipeline.max_inflight"=>1500}
[2017-03-18T09:24:38,918][INFO ][logstash.pipeline ] Pipeline main started
[2017-03-18T09:24:38,960][INFO ][logstash.agent ] Successfully started Logstash API endpoint {:port=>9600}

Then nothing else, until I restart it.

If there's something obviously wrong with the config I'd love to know what, but I can't even find breadcrumbs in any of the log files to chase down, so I'm at a loss... any further help really appreciated!

Thanks,
Greg

Hi Greg,

I did the same setup a few days ago and it seemed to work fine. I posted my notes here if you would like to take a look -> https://github.com/jaijhala/Kafka-with-ELK
Hope it helps.

Regards,
Jai

Thanks, Jai.

SOLVED

Our configurations were eerily similar, but there was one difference that got me thinking: I'm actually using multiple devices.

(APPLIANCE, running Filebeat) -> (INGEST, running Kafka and Logstash) -> (ELASTICSEARCH...)

When I deployed Filebeat against my Ingest server and its Kafka instance, I had an initial error because discovery forced Kafka to advertise its hostname, but Filebeat couldn't do a DNS lookup in my test environment. The DNS lookup seemed to come from a default setting where Kafka falls back to its hostname. So I changed Kafka's server.properties to advertise on its IP, and this allowed Filebeat to hook into Kafka and work.
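
In case it helps someone else, the server.properties change was along these lines (the IP is just a placeholder for the Ingest server's address; on 0.10 the older advertised.host.name setting should work too):

# server.properties on the Ingest box -- advertise a reachable IP instead of the hostname
advertised.listeners=PLAINTEXT://192.0.2.10:9092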

When I was looking at your Logstash conf file, it got me thinking that maybe discovery wasn't working because "localhost:9092" wasn't resolving Kafka cleanly. I quickly changed the bootstrap server setting in my Logstash conf to point to the Ingest server's IP that Kafka was set to advertise on, instead of the loopback interface, and it's all back up and running.
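
In other words, just this one change in the input (same placeholder IP as above):

input {
  kafka {
    bootstrap_servers => "192.0.2.10:9092"   # the address Kafka advertises, not loopback
    topics => "test"
    auto_offset_reset => "latest"
  }
}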

A couple of points on reflection:

  • This was initially working, but stopped working on restart, and I'd love to know why. My (untested) guess is that the Kafka client only uses bootstrap_servers for the initial metadata request and then talks to whatever address the broker advertises, which might explain the half-working behaviour. Now that we know the fix, perhaps someone can shed some more light?
  • Based on what I observed, is there any logging we can get out of Logstash that would make it clear it isn't connecting properly? The only difference in the logs that I see now is that the file output indicates it's opening a file for output...

Thanks to everyone that responded.
Greg


Hi Greg,

Glad to know it's working now.
Regarding the logging in Logstash, have you tried enabling debug logs?
https://www.elastic.co/guide/en/logstash/current/logging.html

You could reproduce the problem you were hitting with debugging on; that might give more clues.
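
From memory, on Logstash 5.x you can either pass a flag at startup:

bin/logstash --log.level=debug

or set it in logstash.yml:

log.level: debug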

--Jai

