I have a 3-node cluster (2 data/master nodes and 1 master-only node with no data). It is being fed by 2 Logstash servers. I have everything tuned based on the production settings suggestions (max file descriptors set to 64K, ES heap size set, swap turned off, mlockall enabled). Everything was running fine until it seemingly locked up and I started getting "Too many open files" messages in elasticsearch.log.
Is there a way to tell what pushed the data servers over the edge? Are there other performance settings to look at?
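In case it's useful, here is roughly how those settings can be confirmed as the nodes themselves report them. This is just a sketch; the localhost:9200 endpoint and the exact fields under /_nodes/process are assumptions that may vary by version:

```python
# Sketch: ask each node what it actually sees for max file descriptors and mlockall.
# Assumes a node is reachable on localhost:9200 and that this version exposes
# these fields under /_nodes/process.
import json
import urllib.request

info = json.load(urllib.request.urlopen("http://localhost:9200/_nodes/process"))

for node_id, node in info["nodes"].items():
    proc = node.get("process", {})
    print(node.get("name"),
          "max_file_descriptors:", proc.get("max_file_descriptors"),
          "mlockall:", proc.get("mlockall"))
```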
This is a bit high, but not insane. I'm very surprised that you managed to reach the max number of file descriptors with that many shards. Any chance you can count how many files you have in your data directories? Also, can you tell us the maximum number of file descriptors as seen by Elasticsearch? https://www.elastic.co/guide/en/elasticsearch/guide/current/_file_descriptors_and_mmap.html
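If it helps, something along these lines can pull both numbers. It's only a sketch; the data path, host/port, and field names are assumptions to adjust for your install and version:

```python
# Rough sketch: count files under the data directory and compare open vs. max
# file descriptors per node. DATA_PATH is an assumed default -- check path.data
# in elasticsearch.yml for the real location.
import json
import os
import urllib.request

DATA_PATH = "/var/lib/elasticsearch"  # assumption; adjust to your path.data

file_count = sum(len(files) for _, _, files in os.walk(DATA_PATH))
print("files under", DATA_PATH, ":", file_count)

stats = json.load(urllib.request.urlopen("http://localhost:9200/_nodes/stats/process"))
for node_id, node in stats["nodes"].items():
    proc = node.get("process", {})
    print(node.get("name"),
          "open:", proc.get("open_file_descriptors"),
          "max:", proc.get("max_file_descriptors"))
```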