Too many open file descriptors

Hi all,

In our dev environment (a single-node Elasticsearch setup) we have seen that the number of open file descriptors reported for Elasticsearch is 27,491,596.

(base) root@root:~$ lsof | awk '{print $2}'| uniq -c | sort -rn | head
      The output information may be incomplete.
27485413 3156
  27000 2578
  18432 2503
  11400 9536
  10584 4987
   5684 15220
   4902 1039
   3840 23025
   3720 23036
   1084 22031
(base) root@root:~$ ps -eAf | grep 3156
root        3156  3100 28 Jun07 ?        1-10:07:07 /opt/jdk-12.0.1/bin/java -Xms4g -Xmx4g -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -Des.networkaddress.cache.ttl=60 -Des.networkaddress.cache.negative.ttl=10 -XX:+AlwaysPreTouch -Xss1m -Djava.awt.headless=true -Dfile.encoding=UTF-8 -Djna.nosys=true -XX:-OmitStackTraceInFastThrow -Dio.netty.noUnsafe=true -Dio.netty.noKeySetOptimization=true -Dio.netty.recycler.maxCapacityPerThread=0 -Dlog4j.shutdownHookEnabled=false -Dlog4j2.disable.jmx=true -Djava.io.tmpdir=/tmp/elasticsearch-14429444041004318688 -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=data -XX:ErrorFile=logs/hs_err_pid%p.log -Xlog:gc*,gc+age=trace,safepoint:file=logs/gc.log:utctime,pid,tags:filecount=32,filesize=64m -Djava.locale.providers=COMPAT -XX:UseAVX=2 -Des.cgroups.hierarchy.override=/ -Des.path.home=/usr/share/elasticsearch -Des.path.conf=/usr/share/elasticsearch/config -Des.distribution.flavor=default -Des.distribution.type=docker -cp /usr/share/elasticsearch/lib/* org.elasticsearch.bootstrap.Elasticsearch -Ediscovery.type=single-node

That looks like way too many, and as an ops person it worries me to see this many file descriptors. Is there a way we can reduce this?

Can anyone please suggest a good approach for handling this kind of setup?

This is not an accurate measurement since lsof reports each file descriptor multiple times, usually once per thread, and Elasticsearch has a lot of threads.
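
A quick way to see the double counting (a rough sketch; 3156 is the Elasticsearch PID from the ps output above) is to compare the thread count with the per-process descriptor count, since lsof effectively repeats the descriptor list once per thread:

# Number of threads in the Elasticsearch process (each one makes lsof repeat the fd list)
sudo ls /proc/3156/task | wc -l

# Unique file descriptors actually held by the process
sudo ls /proc/3156/fd | wc -l

The product of those two numbers should land roughly in the same ballpark as the inflated lsof figure.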

Sorry, my bad.

(base) root@root:~$ sudo ls -l /proc/3156/fd | wc -l
164593

The above is the actual number.

curl --request GET \
> --url http://localhost:9200/_nodes/stats/process
{"_nodes":{"total":1,"successful":1,"failed":0},"cluster_name":"docker-cluster","nodes":{"jKhiPymDS1GPupuJoqgdOA":{"timestamp":1591949218940,"name":"jKhiPym","transport_address":"172.17.0.3:9300","host":"172.17.0.3","ip":"172.17.0.3:9300","roles":["master","data","ingest"],"attributes":{"ml.machine_memory":"67406077952","xpack.installed":"true","ml.max_open_jobs":"20","ml.enabled":"true"},"process":{"timestamp":1591949218941,"open_file_descriptors":164593,"max_file_descriptors":1048576,"cpu":{"percent":4,"total_in_millis":124889660},"mem":{"total_virtual_in_bytes":135503880192}}}}}

That seems much more reasonable. Why is this causing a problem?

I was wondering whether this number of file descriptors is okay for a single-node ES setup, and whether it will create performance bottlenecks on a production machine that uses an Intel® Xeon® Gold 6230 processor.
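
For reference, the node stats above report 164,593 open descriptors against a max_file_descriptors limit of 1,048,576, so usage is roughly 16% of the limit. If it helps, the same ratio is visible through the _cat/nodes API (a small sketch, assuming the default HTTP port 9200):

curl -s 'http://localhost:9200/_cat/nodes?v&h=name,file_desc.current,file_desc.max,file_desc.percent'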

I think it won't cause much trouble. Please correct me if I'm wrong.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.