There might be multiple reasons for this. Each OS handles it a bit differently, and some have a max-files-per-process setting. Can you try posting the output of cat /etc/sysctl.conf to see if there's a setting in there that could be changed? I'm not sure that file exists on every OS, though.
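For reference, you can check the current limits with something like this (the sysctl names differ per OS; kern.maxfilesperproc is the macOS one, fs.file-max is the Linux one):
$ ulimit -n                     # per-process limit in the current shell
$ sysctl kern.maxfilesperproc   # macOS: per-process cap
$ sysctl fs.file-max            # Linux: system-wide cap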
It definitely exists on macOS, and this is how we would fix it there:
$ echo kern.maxfiles=65536 | sudo tee -a /etc/sysctl.conf
$ echo kern.maxfilesperproc=65536 | sudo tee -a /etc/sysctl.conf
$ sudo sysctl -w kern.maxfiles=65536
$ sudo sysctl -w kern.maxfilesperproc=65536
$ ulimit -n 65536
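Afterwards you can double-check that the new values took effect (assuming the commands above all succeeded):
$ sysctl kern.maxfiles kern.maxfilesperproc   # both should report 65536
$ ulimit -n                                   # should report 65536 in this shell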
More logs and info about which OS you are running on might help. Disabling the optimizer might also “help”, though it wouldn't be a real solution; I believe the optimizer creates new files on disk, so it could be what's hitting the file limit. Or it could be one of the plugins you have installed. It could be any number of things.
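If you want to see how close Kibana is getting to the limit, something like this can help (a rough sketch; the pgrep pattern is an assumption and depends on how Kibana was started, and the plugin command assumes you run it from the Kibana install directory):
$ lsof -p $(pgrep -f kibana | head -1) | wc -l   # number of files the Kibana process has open
$ ./bin/kibana-plugin list                       # list the installed plugins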
# System default settings live in /usr/lib/sysctl.d/00-system.conf.
# To override those settings, enter new settings here, or in an /etc/sysctl.d/<name>.conf file
# For more information, see sysctl.conf(5) and sysctl.d(5).
kernel.sem = 250 1024000 32 5120
kernel.core_pattern = /var/core/core-%e-sig%s-user%u-group%g-pid%p-time%t
kernel.core_uses_pid = 1
fs.suid_dumpable = 2
vm.swappiness = 0
vm.max_map_count = 262144
fs.file-max = 16384
Kibana Version = 5.1.2
I have the open source version of Kibana, without X-Pack. I use some of my own Kibana plugins (I don't think they are the problem, because there is no such problem on other servers.)
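Based on the macOS commands above, would the Linux equivalent below be the right fix for my server? (Just a guess on my part; the 90-file-max.conf file name is only an example.)
$ sudo sysctl -w fs.file-max=65536
$ echo fs.file-max=65536 | sudo tee /etc/sysctl.d/90-file-max.conf   # persist across reboots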