I'm trying to pass the MAX_OPEN_FILES parameter to the JVM.
Actually, I am trying to increase the file descriptor limit on Mac OS X. I raised it with the launchctl command, but by default the JVM sets the limit itself, which is 10240. Therefore, I need to pass the MaxFDLimit parameter to the JVM when Elasticsearch starts.
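For context, this is roughly what I did — 65536 is an illustrative value, not a recommendation:

```shell
# Illustrative sketch of raising the open-files limit on macOS before
# starting Elasticsearch.

# System-wide cap (macOS only, needs admin rights) — shown commented out:
#   sudo launchctl limit maxfiles 65536 65536

# Raise this shell's soft limit to its hard limit; a JVM started from
# this shell inherits it.
ulimit -n "$(ulimit -Hn)" 2>/dev/null || true

# Confirm what the shell (and thus the JVM) will see.
ulimit -n
```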
Doesn't the JVM by default use the process's fd limit (OPEN_MAX), indicating that you're not upping that limit correctly? Now, if you do need to pass -XX:-MaxFDLimit I don't see why setting MAX_OPEN_FILES would help. Wouldn't you actually want to append -XX:-MaxFDLimit to ES_JAVA_OPTS? Seemingly relevant: http://stackoverflow.com/a/23530494/414355
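To be concrete about what I mean by appending it — a sketch, assuming the stock bin/elasticsearch startup script, which reads the ES_JAVA_OPTS environment variable:

```shell
# Append the flag to ES_JAVA_OPTS so the startup script passes it
# through to the JVM.
export ES_JAVA_OPTS="${ES_JAVA_OPTS:-} -XX:-MaxFDLimit"

# Then start Elasticsearch as usual, e.g.:
#   ./bin/elasticsearch

# Sanity check: the flag should now be in the environment.
echo "$ES_JAVA_OPTS"
```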
After the above step, and after restarting Elasticsearch, I'm getting the following response from Elasticsearch when I use this command: curl -XGET 'localhost:9200/_nodes/process?pretty'
It's hard to believe that adding it to ES_JAVA_OPTS or mimicking what you did in the shell (i.e. passing it directly to bin/elasticsearch) doesn't work. If that indeed is the case, inspecting the actual, full, command line in both cases should provide clues.
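One way to inspect it: dump the full command line of the running process and look for the Elasticsearch launcher (ps flags vary slightly between Linux and macOS; this form should work on both):

```shell
# List full command lines of all processes; the bracketed pattern
# keeps the grep process itself out of the results.
ps axww -o command | grep '[o]rg.elasticsearch' \
  || echo "no Elasticsearch process found"
```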
The -XX:-MaxFDLimit JVM flag does not do anything useful. It was meant for the old Solaris 8 OS, where the kernel allocates only 256 file descriptors per process by default.
You should be aware that this flag will be removed in future JDK releases.
This flag is not needed anymore. On Linux, you adjust the kernel settings in /etc/security/limits.conf, but the default setting for file descriptors has already been increased to 10240 in current distributions, which is more than you will need for Elasticsearch.
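For reference, if you do need to raise it, limits.conf entries look like this — "elasticsearch" is an assumed service-user name and 65536 an illustrative value:

```
# /etc/security/limits.conf
# <user>        <type>  <item>   <value>
elasticsearch   soft    nofile   65536
elasticsearch   hard    nofile   65536
```

The new limits apply to sessions opened after the change (via pam_limits), not to already-running processes.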