I have Elasticsearch 6.4.2 running on a Mac, inside a container (Docker Desktop). I have about 240 indices and 1100 shards, with 0 replicas. There are probably 100k docs at most (I can't get an exact count because the cluster is red due to the file-descriptor errors).
I see errors in the ES logs saying "too many open files". When I check with
`lsof | grep elasticsearch | wc -l`
it reports 800k+ FDs.
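One thing I suspect (an assumption on my part, not something I've confirmed): plain `lsof` lists each descriptor once per task/thread, so a heavily multi-threaded JVM can turn a few thousand real FDs into hundreds of thousands of `lsof` lines. Counting entries under `/proc/<pid>/fd` inside the (Linux) container should give the true number. A sketch, demonstrated on the current shell's own PID since the Elasticsearch PID will differ:

```shell
# lsof without -p repeats every FD once per task/thread, inflating the
# total for multi-threaded processes like the JVM.
# Counting entries in /proc/<pid>/fd gives the real per-process number.
# Demonstrated on this shell's PID; substitute the ES PID inside the container.
PID=$$
ls /proc/"$PID"/fd | wc -l
```

(`/proc` exists inside the Linux container even though the Docker host is a Mac.)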
This seems strange, considering that the number of indices isn't large and the data is small. I have seen bigger clusters (in terms of index/doc count) with far fewer FDs, so I am not sure what's wrong with my local cluster.
ES data directory stats: 5200 directories, 33015 files.
The majority of these files are translog files; I am not sure whether that is normal.
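As I understand it, each shard keeps its own translog directory (one `.tlog` file per generation plus a `translog.ckp` checkpoint), so with ~1100 shards a large translog file count may be expected. This is roughly how I count them, sketched against a throwaway directory so the paths are illustrative, not my real data path:

```shell
# Illustrative only: build a fake data-dir layout, then count translog files.
DATA_DIR=$(mktemp -d)
mkdir -p "$DATA_DIR/nodes/0/indices/idx/0/translog"
touch "$DATA_DIR/nodes/0/indices/idx/0/translog/translog-1.tlog" \
      "$DATA_DIR/nodes/0/indices/idx/0/translog/translog-2.tlog" \
      "$DATA_DIR/nodes/0/indices/idx/0/translog/translog.ckp"
# Count generation files; prints 2 for this fake layout.
find "$DATA_DIR" -name '*.tlog' | wc -l
rm -r "$DATA_DIR"
```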
The process limits (this is `ulimit -a` output from inside the container) show:
```
core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
scheduling priority             (-e) 0
file size               (blocks, -f) unlimited
pending signals                 (-i) 47843
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 655366
pipe size            (512 bytes, -p) 8
POSIX message queues     (bytes, -q) 819200
real-time priority              (-r) 0
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) unlimited
virtual memory          (kbytes, -v) unlimited
file locks                      (-x) unlimited
```
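For comparison with those limits, the node stats API reports ES's own view of its FD usage (assuming the default `localhost:9200` binding; adjust host/port to however the container's port is mapped):

```shell
# Ask ES itself how many FDs its process has open vs. the configured max.
curl -s 'localhost:9200/_nodes/stats/process?filter_path=**.open_file_descriptors,**.max_file_descriptors&pretty'
```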