Hello Team,
I am facing a serious problem with Elasticsearch. I am getting the following error in the Elasticsearch logs:
/opt/SOME_DIRECTORY_NAME/elasticsearch/data/nodes/0/indices/7QRpS9mnSwCHPdBwaLUJag/3/translog/translog-53.ckp: Too many open files
Because of this error, Elasticsearch fails to initialize.
While Elasticsearch is running, lsof reports more than 40,000,000 open files.
When I stop the Elasticsearch process, only 2,000 to 4,000 files remain open.
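I suspect lsof may be inflating this number: on Linux it can print one row per task/thread, and Elasticsearch runs many threads. To get the real descriptor count, I think the fd entries under /proc can be counted directly (a sketch; the pgrep pattern is an assumption about how the JVM was launched):

# PID of the Elasticsearch JVM (the pattern below is an assumption)
ES_PID=$(pgrep -f org.elasticsearch.bootstrap.Elasticsearch)
# count the file descriptors the process actually holds
ls /proc/$ES_PID/fd | wc -l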
When the Elasticsearch process is running, lsof shows many files open multiple times. For example, the file below appears 60 to 70 times in the lsof output:
/opt/SOME_DIRECTORY_NAME/elasticsearch/data/nodes/0/indices/zmE65g1QRcCJ6rQIp97w8g/0/translog/translog-107.ckp
The same happens for all files under this directory:
/opt/SOME_DIRECTORY_NAME/elasticsearch/data/nodes/0/indices/
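To see how many times each path really shows up, the lsof output can be tallied per file (a sketch, using $ES_PID from above; this takes the last whitespace-separated field as the name, which works here because these paths contain no spaces):

# count duplicate entries per path, most duplicated first
lsof -p $ES_PID | awk '{print $NF}' | sort | uniq -c | sort -rn | head

If the per-file counts roughly match the number of Elasticsearch threads, that would point at per-thread duplication in the lsof listing rather than a genuine descriptor leak.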
Output of ulimit -a:
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 63457
max locked memory (kbytes, -l) unlimited
max memory size (kbytes, -m) unlimited
open files (-n) 1000000
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 4096
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
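Since ulimit -a only reflects the shell it is run from, the limits actually applied to the running Elasticsearch process may differ (for example, a systemd unit's LimitNOFILE setting overrides the shell limits). They can be read from /proc (again using $ES_PID from above; the systemd unit name is an assumption):

# effective limit of the running process
cat /proc/$ES_PID/limits | grep 'open files'
# if Elasticsearch runs under systemd
systemctl show elasticsearch --property=LimitNOFILE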
One strange thing I have noticed is that a very large number of files exist in the directory below:
/opt/SOME_DIRECTORY_NAME/elasticsearch/data/nodes/0/
All Elasticsearch index data is stored in this directory, and it contains more than 400,000 files:
find /opt/SOME_DIRECTORY_NAME/elasticsearch/data/nodes/0/ -type f| wc -l
400000
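To narrow down which index contributes most of these files, a per-index count like the following could help (a sketch, reusing the same data path):

# list index directories by file count, largest first
for d in /opt/SOME_DIRECTORY_NAME/elasticsearch/data/nodes/0/indices/*/; do
  echo "$(find "$d" -type f | wc -l) $d"
done | sort -rn | head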