I have a 3-node cluster (2 data/master nodes and 1 master-only/no-data node) being fed by 2 Logstash servers. I have everything tuned based on the production setting suggestions: max file descriptors raised to 64K, ES_HEAP_SIZE set, swap turned off, and mlockall enabled. Everything was running fine until the cluster seemingly locked up and I started getting "Too many open files" messages in elasticsearch.log.
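In case it matters, this is how I've been verifying that the tuning actually took effect (assuming the default HTTP port on localhost; adjust host/port for your setup):

```
# Show what each node actually sees for process limits,
# including max_file_descriptors and whether mlockall succeeded
curl -s 'localhost:9200/_nodes/process?pretty'
```

Both nodes report mlockall true and the 64K descriptor limit, so as far as I can tell the settings are applied.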
Is there a way to tell what pushed the data servers over the edge? Are there other performance settings to look at?
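For reference, the only visibility I have right now is polling the node stats and counting descriptors on the box itself, along these lines (again assuming localhost:9200, and the pgrep pattern may need adjusting for how your ES process was launched):

```
# Snapshot current open file descriptors per node via the stats API
curl -s 'localhost:9200/_nodes/stats/process?pretty'

# Or count open files for the ES process directly on a data node
lsof -p "$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)" | wc -l
```

That tells me where the count is now, but not what drove it up before the lockup.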