Hi all,
I am using Elasticsearch and I am getting the error "java.io.IOException: Too many open files".
I am monitoring open files with the following command: sudo lsof | grep java | grep mpidcmgr | wc -l, and the result increases after every query to Elasticsearch.
Why does the number of open files only increase and never decrease?
What is causing this? Do you have a solution?
Full exception:
java.io.IOException: Too many open files
at sun.nio.ch.ServerSocketChannelImpl.accept0(Native Method) ~[na:1.8.0_77]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:422) ~[na:1.8.0_77]
at sun.nio.ch.ServerSocketChannelImpl.accept(ServerSocketChannelImpl.java:250) ~[na:1.8.0_77]
at org.jboss.netty.channel.socket.nio.NioServerBoss.process(NioServerBoss.java:100) [io.netty.netty-3.10.6.Final.jar:na]
at org.jboss.netty.channel.socket.nio.AbstractNioSelector.run(AbstractNioSelector.java:337) [io.netty.netty-3.10.6.Final.jar:na]
at org.jboss.netty.channel.socket.nio.NioServerBoss.run(NioServerBoss.java:42) [io.netty.netty-3.10.6.Final.jar:na]
at org.jboss.netty.util.ThreadRenamingRunnable.run(ThreadRenamingRunnable.java:108) [io.netty.netty-3.10.6.Final.jar:na]
at org.jboss.netty.util.internal.DeadLockProofWorker$1.run(DeadLockProofWorker.java:42) [io.netty.netty-3.10.6.Final.jar:na]
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142) [na:1.8.0_77]
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617) [na:1.8.0_77]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_77]
The open file descriptor count increases because the data may be stored across multiple segments / files, and in order to read from or write to those segments they have to be opened. I would make sure the ulimit for the user running the ES process allows at least 65k for both open files and max user processes.
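For example, if the limits are managed through /etc/security/limits.conf and the node runs as an elasticsearch user (the user name here is an assumption; adjust it to whatever actually runs your process), entries along these lines raise both limits:

# /etc/security/limits.conf entries for the user running Elasticsearch
elasticsearch  soft  nofile  65536
elasticsearch  hard  nofile  65536
elasticsearch  soft  nproc   65536
elasticsearch  hard  nproc   65536

After restarting the node, cat /proc/<pid of the ES process>/limits shows whether the running JVM actually picked the new values up.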
I increased the open file limit, but after a stress test (JMeter) I got the error again.
Before the change I hit the error at around 2,000 open files; now it happens at around 10,000.
I don't want to crash ...
How often are open files closed?
What is the trigger?
Do you have a solution to this problem?
How often are open files closed?
Open files are closed when the application / process that needed the descriptor is finished with it.
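In JVM terms, a descriptor shows up in lsof when something opens it and only goes away when close() is called on it (or the process exits). A minimal illustration, nothing Elasticsearch-specific (the file path is just a placeholder):

import java.nio.channels.FileChannel;
import java.nio.file.Paths;
import java.nio.file.StandardOpenOption;

public class DescriptorLifecycle {
    public static void main(String[] args) throws Exception {
        // Stays open, and visible in lsof, until close() is called or the JVM exits:
        FileChannel leaked = FileChannel.open(Paths.get("/tmp/example.dat"),
                StandardOpenOption.CREATE, StandardOpenOption.WRITE);

        // Released as soon as the block ends, because try-with-resources calls close():
        try (FileChannel ch = FileChannel.open(Paths.get("/tmp/example.dat"),
                StandardOpenOption.READ)) {
            // ... use the channel ...
        }
    }
}

The same goes for the sockets a client opens towards the cluster: they count against the limit until whatever opened them calls close().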
What is the trigger?
I cannot identify the trigger without heavily debugging this. However, since you say this happens when you run your queries against ES with JMeter as the benchmarking tool, I'm leaning toward the benchmarking load. I would like to know the level of benchmarking you are doing: how many queries you are running and how many indices each query hits.
Do you have a solution to this problem?
Once we find the trigger I can propose a solution. I would cut the benchmarking load to 1/10, see if you still get the same problem, and then slowly increase it.
Ok, thanks.
But I still have a problem with the growing number of open files.
Why does it cause my server to get stuck?
I tried to add limitations like the following.
This code works and returns a response, but without the limitations:
Settings settings = Settings.builder()
        .put("cluster.name", clusterName)
        .put("xpack.security.user", user)
        .put("xpack.security.transport.ssl.enabled", true)
        .put("request.headers.X-Found-Cluster", "${cluster.name}")
        .build();
The following code, with the limitations added, returns this error: