Too many open files

Hi all,
I am using Elasticsearch and I am getting the error "Too many open files".
I'm running the following command: `sudo lsof | grep java | grep mpidcmgr | wc -l`, and the result increases after every query to Elasticsearch.
Why does the number of open files increase and never decrease?
What is my problem? Do you have a solution for it?

Full exception: Too many open files

```
Too many open files
	at ... (Native Method) ~[na:1.8.0_77]
	at ... ~[na:1.8.0_77]
	at ... ~[na:1.8.0_77]
	at ... [io.netty.netty-3.10.6.Final.jar:na]
	at ... [io.netty.netty-3.10.6.Final.jar:na]
	at ... [io.netty.netty-3.10.6.Final.jar:na]
	at ... [io.netty.netty-3.10.6.Final.jar:na]
	at org.jboss.netty.util.internal.DeadLockProofWorker$... [io.netty.netty-3.10.6.Final.jar:na]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(...) [na:1.8.0_77]
	at java.util.concurrent.ThreadPoolExecutor$... [na:1.8.0_77]
	at ... [na:1.8.0_77]
```

(the class and method names were lost when the trace was pasted)

The open file descriptor count increases because the data may be stored across multiple segments / files, and in order to read from or write to those segments they must be opened. I would make sure the ulimit for the user running the ES process allows at least 65k for both open files and max user processes.
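Assuming a Linux host, the limits can be inspected and raised like this (the `elasticsearch` user name and the 65536 values below are examples, not taken from this thread):

```shell
# Show the current soft and hard limits on open files for this shell
ulimit -Sn
ulimit -Hn

# To raise the limits persistently, add lines like these to
# /etc/security/limits.conf (example values), then log in again:
#   elasticsearch  soft  nofile  65536
#   elasticsearch  hard  nofile  65536
#   elasticsearch  soft  nproc   65536
#   elasticsearch  hard  nproc   65536

# Verify what the running ES process actually got (replace <es-pid>):
# grep -i 'open files' /proc/<es-pid>/limits
```

Note that limits set in a shell only apply to processes started from that shell; the value that matters is the one the ES process itself reports in `/proc/<es-pid>/limits`.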

Where do I check that?
And how do I increase it?

Hello again,
I found the following guide: Http:// , and I did exactly what it proposed.

The limit on the number of open files has increased, but after a stress test (JMeter) I got the error again.
Before the change I got the error at around 2,000 open files; now it appears at around 10,000 open files.

I don't want the server to crash ...
How often are open files closed?
What is the trigger?
Do you have a solution to this problem?

How often are open files closed?
Open files are closed when the application / process that needed the descriptor finishes with it.
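As a small illustration (a sketch assuming a Linux host, where a process's open descriptors are listed under `/proc/self/fd`): a descriptor is released as soon as the stream holding it is closed, not on some periodic schedule:

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.IOException;

public class FdDemo {
    // Count this process's open file descriptors (Linux-specific).
    static int openFds() {
        String[] fds = new File("/proc/self/fd").list();
        return fds == null ? -1 : fds.length;
    }

    public static void main(String[] args) throws IOException {
        int before = openFds();
        // try-with-resources closes the underlying descriptor on exit
        try (FileInputStream in = new FileInputStream("/proc/self/status")) {
            System.out.println("while open: " + openFds()); // one more than before
        }
        // the count is back to the old value once the stream is closed
        System.out.println(before == openFds());
    }
}
```

A client that is opened per request and never closed keeps its descriptors alive, which is why the `lsof` count keeps growing.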

What is the trigger?
I cannot identify the trigger without heavy debugging. However, since you say this happens when you run your queries against ES with JMeter as the benchmarking tool, I lean toward the benchmarking load being the cause. I would like to know the level of benchmarking you are doing: how many queries you are running, and how many indices each query hits.

Do you have a solution to this problem?
Once we find the trigger I can propose a solution. I would cut the benchmarking load to 1/10, see if you still get the same problem, and then slowly increase it.

Is it necessary to close the connection after each query?

```java
this.client = new PreBuiltXPackTransportClient(settings)
        .addTransportAddress(new InetSocketTransportAddress(
                InetAddress.getByName(elasticSearchHost), elasticSearchPort));
```
No. Just close the client when your application stops.

BTW use a singleton for the client.
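A minimal sketch of that singleton pattern (the `ClientHolder` name and the stand-in client class are hypothetical; in a real project the stand-in would be the `PreBuiltXPackTransportClient` built above):

```java
import java.io.Closeable;
import java.util.concurrent.atomic.AtomicInteger;

public class ClientHolder {
    // Stand-in for the transport client so this sketch is self-contained.
    static class FakeClient implements Closeable {
        static final AtomicInteger instances = new AtomicInteger();
        FakeClient() { instances.incrementAndGet(); }
        @Override public void close() { /* release sockets / descriptors here */ }
    }

    private static volatile FakeClient client;

    // Lazy, thread-safe initialization: every caller shares one instance.
    public static FakeClient get() {
        if (client == null) {
            synchronized (ClientHolder.class) {
                if (client == null) {
                    client = new FakeClient();
                    // Close the client exactly once, when the JVM stops.
                    Runtime.getRuntime().addShutdownHook(
                            new Thread(() -> client.close()));
                }
            }
        }
        return client;
    }

    public static void main(String[] args) {
        FakeClient a = get();
        FakeClient b = get();
        System.out.println(a == b);                      // same instance
        System.out.println(FakeClient.instances.get());  // only one was built
    }
}
```

With one shared client there is one pool of connections for the whole application, instead of a new set of sockets (and descriptors) per query.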


Ok, thanks.
But I still have a problem with the growth in the number of open files.
Why does it cause my server to get stuck?

I tried to add limitations like the following:

The following code works and returns an answer, but without the limitations:

```java
Settings settings = Settings.builder().put("", clusterName)
        .put("", user)
        .put("", true)
        .put("request.headers.X-Found-Cluster", "${}").build();
```

The following code, with the limitations, returns the error below:

```java
Settings settings = Settings.builder().put("", clusterName)
        .put("", user)
        .put("", true)
        .put("request.headers.X-Found-Cluster", "${}")
        .put("client.transport.sniff", true)
        .put("", 8)
        .put("", 100)
        .build();
```

```
NoNodeAvailableException[None of the configured nodes are available: [{#transport#-1}{i3fGwgU7QNm60-yTrpFtIQ}{}{}]]
```

Hard to say. Maybe share a code example?

Please format your code using the </> icon, as explained in this guide. It will make your post more readable.

Or use markdown style like:

    ```
    your code here
    ```

Yeah. You never initialized your elasticsearchConnectionProvider object,
so you were building a new instance every time.

Please don't remove your code, as it can help others.
It's better for them to see what you did to fix it.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.