We recently upgraded one of our Elasticsearch servers from Oracle Linux 6.9 to 7.4. After the upgrade, the server reports over 1 million open files for the Elasticsearch process. We are running Elasticsearch 5.2, and the cluster has 228 indices and 710 shards.
I compared this with another server that was not upgraded (still on Linux 6.9). That server shows only about 16,000 open files for the Elasticsearch process, even though it is also running Elasticsearch 5.2 and its cluster is larger: 385 indices and 1,221 shards.
Has anyone experienced this issue? Any reason for the large number of open files?
Figured out the issue. With Linux 7, the version of the lsof command changed from 4.82 to 4.87. By default, version 4.87 lists every open file once per thread, while version 4.82 did not include thread information. Since Elasticsearch runs many threads, this multiplied the count and explains why lsof returned over a million lines. If lsof is run against a specific PID, it does not repeat the thread information.
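For anyone who wants to sanity-check the real number, here is a rough sketch. It counts the file descriptors directly from /proc, which is independent of the lsof version; the PID variable is just a placeholder for illustration (I use the shell's own PID so the snippet runs as-is — substitute your Elasticsearch PID).

```shell
#!/bin/sh
# Placeholder PID for illustration; replace with the Elasticsearch PID,
# e.g. PID="$(pgrep -f org.elasticsearch)".
PID=$$

# Count the process's actual open file descriptors via /proc.
# This is per-process, so it is not inflated by per-thread rows.
ls /proc/"$PID"/fd | wc -l

# For comparison (commented out): on lsof >= 4.87 a full scan can emit
# one row per thread per open file, inflating the total, whereas
# querying a single PID avoids the duplication:
# lsof -p "$PID" | wc -l
```

The /proc count is what actually matters for the file-descriptor ulimit, so it is a better number to monitor than a raw `lsof | wc -l`.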