I have this pipeline:
Apache webserver + Filebeat -> Logstash1 (no filters) -> Kafka -> Logstash2 (filters) -> ES + Kibana
Via my Kafka consumer, I can see the logs arriving on the Kafka topic, and in Kibana I can see the heartbeat being indexed in ES. But for some reason, no logs are making it from Kafka -> Logstash2 -> ES anymore. I looked at Logstash2's log file and there are no errors, and the config looks fine: I removed all the filters, and the IPs and ports are correct. I created a new topic in Kafka and temporarily had logs arriving in ES, but then it stopped again (the heartbeats continued).
In the second Logstash instance, replace the elasticsearch output with a simple stdout { codec => rubydebug } output so you can see exactly what's happening. Does that help? Increase the Logstash log level by starting it with --verbose or even --debug. Do you get any useful clues? Can you see whether Logstash connects to Kafka? Et cetera.
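A minimal debug config for Logstash2 could look something like this — the broker address and topic name are placeholders, and the kafka input option names vary with plugin version (older versions use zk_connect/topic_id instead of bootstrap_servers/topics):

```conf
input {
  kafka {
    bootstrap_servers => "localhost:9092"   # assumed broker address, adjust to yours
    topics => ["apache-logs"]               # assumed topic name
  }
}

output {
  # Print every event to the console instead of indexing into ES,
  # so you can see whether events arrive from Kafka at all.
  stdout { codec => rubydebug }
}
```

If events show up here but not in ES, the problem is downstream of the input; if nothing shows up, focus on the Kafka connection.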
Ok, so I had moved my groks from LS1 to LS2, but my mistake was not factoring in that LS2 was now grok'ing an already-modified bunch of text. While troubleshooting, I opened a Kafka producer console and noted that text sent that way did end up in ES, which pointed to an issue with the grok. Wish I had figured that out sooner!
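For anyone hitting the same thing, the failure mode is roughly this (a sketch, assuming the standard COMBINEDAPACHELOG pattern was in use):

```conf
filter {
  # This only matches if "message" is still the raw Apache combined-log line.
  # If Logstash1 already grok'ed or mutated the event before it went into
  # Kafka, the pattern no longer matches and the event is tagged with
  # _grokparsefailure instead of being parsed.
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}
```

Checking for a _grokparsefailure tag in the rubydebug output is a quick way to spot this.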
Thanks for your reply, I appreciate you trying to help me.