I am running Logstash on server 1 and have Elasticsearch and Kibana configured on server 2, so Logstash on server 1 sends data/logs to Elasticsearch on server 2.
Recently I ran into a problem with this setup: when Elasticsearch on server 2 is down, Logstash on server 1 keeps trying to send logs to it and appears to keep spawning threads, which results in high CPU utilization.
I am looking for a solution where, if Elasticsearch is down, Logstash does not send any data to it. I'm not using an ES cluster.
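For reference, a minimal sketch of the kind of pipeline configuration assumed here (the input plugin, hostname, port and index name are placeholders, not my exact settings):

```
# Hypothetical pipeline on server 1; only the elasticsearch output matters for this question.
input {
  beats {
    port => 5044        # assumed input, could be any input plugin
  }
}

output {
  elasticsearch {
    hosts => ["http://server2:9200"]    # Elasticsearch on server 2
    index => "logs-%{+YYYY.MM.dd}"      # example index pattern
  }
}
```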
I would not expect to see high CPU in those circumstances. If the elasticsearch output cannot open a connection to the ES instance it should retry, the internal queues should fill, and back-pressure should stop the inputs from processing. I would expect the elasticsearch output to need maybe 2% of a CPU to keep trying to resurrect the connection.
What makes you think that Logstash is spawning additional threads? Have you used the hot threads API to monitor what it is doing?
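If not, the Logstash monitoring API exposes it. For example (assuming the default API port of 9600 on server 1, and that you run this on that host):

```
# Show the busiest Logstash threads in human-readable form
curl -XGET 'http://localhost:9600/_node/hot_threads?human=true&threads=10'
```

That output should show whether the CPU time is really going into the elasticsearch output's retry loop or into something else entirely.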