I'm having a bit of trouble finding a way to approach resilience in my Elasticsearch cluster.
Currently I have 2 Logstash nodes for resilience.
Each of them points to my 4 Elasticsearch ingest/data_hot nodes.
Plus, I have 3 more master nodes coordinating it all.
My problem is: if one of my ingest/data_hot nodes fails or goes down, Logstash stops sending data to Elasticsearch and gives me the error "Elasticsearch Unreachable: [...] connect timed out".
Is there any way to avoid these errors and keep sending data to the Elasticsearch nodes that are still alive?
Thank you for your help!!
Logstash should load balance if you have more than one node in the output configuration; it will still log this error when a node is unreachable, but it will retry and send the data through another node.
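For reference, here is a minimal sketch of what that output block could look like. The hostnames and port are placeholders; the key point is listing all four hot nodes under `hosts` so the plugin can rotate away from an unreachable one:

```
output {
  elasticsearch {
    # With multiple hosts, Logstash load balances requests across them
    # and retries on another host if one becomes unreachable.
    hosts => ["http://es-hot-1:9200", "http://es-hot-2:9200",
              "http://es-hot-3:9200", "http://es-hot-4:9200"]
  }
}
```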
Currently I'm also not using replicas, so some primary shards are missing because of the node that is down.
If Logstash does keep indexing while a node is unavailable, maybe the missing primaries are the reason I'm not seeing any indexing activity?
I'm going to implement replicas.
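In case it helps anyone following along, enabling replicas for future indices can be done with an index template. This is only a sketch; the template name and index pattern below are placeholders you would adapt to your own naming scheme:

```
PUT _index_template/one-replica-default
{
  "index_patterns": ["logstash-*"],
  "template": {
    "settings": {
      "index.number_of_replicas": 1
    }
  }
}
```

Note this only affects indices created after the template exists; existing indices need their settings updated directly.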
Plus, I have another question: is there any way to update the number of replicas of the system indices, all of them at once? I would like to increase it to 3, but I can't find a way to do it.
I mean, I would like to have 1 replica for the normal indices and 3 for the system indices.
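One possible approach, sketched below with the update index settings API. The index patterns are assumptions for illustration (system indices are dot-prefixed and hidden, so matching them needs `expand_wildcards=all`), and keep in mind that in recent Elasticsearch versions system indices are managed internally, so changing their settings directly may be restricted or discouraged:

```
# Set 1 replica on the regular indices (adjust the pattern to your names)
PUT /logstash-*/_settings
{
  "index": { "number_of_replicas": 1 }
}

# Set 3 replicas on the dot-prefixed (system/hidden) indices
PUT /.*/_settings?expand_wildcards=all
{
  "index": { "number_of_replicas": 3 }
}
```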