Too many open files

We are running Elasticsearch 1.4.5 (CentOS 6, Java 7) and have twice run into a "too many open files" error on our staging cluster, even with the file-descriptor limit set to 64k.

It seems to happen suddenly; nothing is particularly busy on the cluster at the time.

We are trying out Marvel, and most of the open file handles seem to belong to its indices.

Restarting the cluster clears up the issue, but it has now reappeared after a few days. Has anyone else run into similar issues?
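For anyone wanting to reproduce the check: one way to see both the limit the running process actually got and how many descriptors it currently holds is via /proc. Finding the PID with pgrep is an assumption about your setup; the demo below uses the current shell's own PID so it runs anywhere on Linux.

```shell
# ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)  # substitute your ES PID
ES_PID=$$   # demo: use this shell's own PID

# The limit the running process actually got (can differ from `ulimit -n`
# in your login shell if the service was started by init):
grep 'open files' /proc/$ES_PID/limits

# How many descriptors it currently has open:
ls /proc/$ES_PID/fd | wc -l
```

Comparing that count against the limit over time shows whether descriptors are leaking gradually or spiking.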


Running lsof will tell you which files are open. It's possible there are tons of Marvel-related indices/shards/segments/files, yes, but lsof will show you exactly who has what open. If you are trying out ES monitoring tools, have a look at SPM; it has pretty nice ES monitoring/alerting/anomaly detection.
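When the raw `lsof -p <pid>` listing is overwhelming, a quick summary of the most common open paths can be built from /proc, which holds the same information lsof reads. A sketch (again using the current shell's PID as a stand-in for the Elasticsearch PID):

```shell
ES_PID=$$   # substitute the Elasticsearch PID here

# Resolve every open descriptor to its target path and count duplicates;
# the top entries show which files/directories dominate the fd usage.
for fd in /proc/$ES_PID/fd/*; do
  readlink "$fd"
done | sort | uniq -c | sort -rn | head -20
```

If Marvel is the culprit, its index paths should dominate the top of that list.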



Yeah, I am seeing lots of files like /nodes/0/indices/.marvel-2015.05.27/0/translog/translog-1432684803614 (deleted).
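That "(deleted)" suffix means the file was unlinked but the process still holds a descriptor on it, so it keeps counting against the open-files limit until the process closes it; that would also explain why a restart clears things up. A minimal demonstration of the effect:

```shell
# Open a file, delete it, and observe it lingering as "(deleted)";
# this is the same state lsof reports for those translog files.
tmp=$(mktemp)
exec 9>"$tmp"            # fd 9 now refers to the file
rm "$tmp"                # unlink it; the descriptor keeps the inode alive
readlink /proc/$$/fd/9   # shows the old path with a "(deleted)" suffix
exec 9>&-                # only closing the fd actually releases it
```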

I'll try removing Marvel to see if that helps and try out some alternatives.