I'm currently seeing my Fluentd instances fail to send logs to Logstash, with repeated HTTP 429 (Too Many Requests) errors. Once this happens, no new results show up in Kibana. Searching online suggests this is a problem with indexing at the Elasticsearch stage not keeping up. I would like to understand what my options are to resolve this.
Hi Christian, thanks for your response. I think I may have misdiagnosed my issue. It seems that when Fluentd tries to flush its buffer chunks to Logstash, it runs into a timeout. I don't see any errors on my Logstash instance, and I'm already running it at DEBUG log level, so I'm not sure how to pin down the cause of the timeout.
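For context, my Fluentd output section looks roughly like this (the endpoint, paths, and exact values below are placeholders, not my real settings):

```
<match app.**>
  @type http
  endpoint http://logstash.internal:8080/    # placeholder host/port
  <buffer>
    @type file
    path /var/log/fluentd/buffer             # placeholder path
    flush_interval 5s
    flush_thread_count 2
    chunk_limit_size 8m
    # block new events instead of dropping them when the buffer fills
    overflow_action block
  </buffer>
</match>
```

Is the flush timeout most likely governed by the output plugin's request timeout, or by something on the Logstash receiving side?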
Additionally, the Logstash console log is showing rubydebug output for events that were emitted over 30 minutes ago, so it seems like something is backed up on the Logstash side.
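In case it's relevant, my logstash.yml is close to the defaults; the sketch below is approximately what I'm running (worker and batch values are illustrative, not tuned):

```
# logstash.yml (approximate; values are placeholders)
pipeline.workers: 2        # defaults to the number of CPU cores
pipeline.batch.size: 125   # events per worker per batch (default)
queue.type: memory         # in-memory queue; 'persisted' would buffer to disk
```

Would raising the worker/batch settings or switching to a persisted queue help with this kind of backlog, or does that just move the bottleneck?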
Any suggestions on where to look or how to debug the issue?