Elastic too many open files

Hi,

I have several problems after upgrading Elasticsearch, Kibana, and Logstash to v5.6.0.
One of the issues is that unassigned shards cannot be assigned because of "too many open files". I have increased the open files limit to 20480 and set vm.max_map_count=262144, but the shards still cannot be assigned.

I have read: https://www.elastic.co/guide/en/elasticsearch/guide/master/_file_descriptors_and_mmap.html

What should I do? Should I restart Elasticsearch after increasing the open files limit?
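For reference, the limit the running node actually sees can be read back from the nodes stats API (a minimal sketch, assuming the node listens on localhost:9200):

```
curl -s 'localhost:9200/_nodes/stats/process?filter_path=**.max_file_descriptors,**.open_file_descriptors&pretty'
```

If max_file_descriptors still shows the old value, the process has not picked up the new limit yet.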

Thank you

We had to add "elasticsearch - nofile 65536" to /etc/security/limits.conf on CentOS to fix the issues we had.
https://www.elastic.co/guide/en/elasticsearch/reference/master/setting-system-settings.html

The elasticsearch user will only pick up the new setting on its next session, so a restart of the Elasticsearch service is required.
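Note that on systemd-based installs (the default for the CentOS 7 RPM), limits.conf is not consulted at all; a unit override is needed instead. A minimal sketch, assuming the standard RPM service name:

```
# /etc/systemd/system/elasticsearch.service.d/override.conf
[Service]
LimitNOFILE=65536
```

Then run `systemctl daemon-reload` and restart the elasticsearch service so the new limit takes effect.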

Also, the thread "After adding 'elasticsearch - nofile 65536' still not enough threads are allocated" says that nproc is needed rather than nofile, but this conflicts with the documentation...
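If in doubt, both limits can be raised side by side; a sketch of the limits.conf entries (the nproc value here is just an example, check the bootstrap-check requirements for your version):

```
# /etc/security/limits.conf
elasticsearch - nofile 65536
elasticsearch - nproc  4096
```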

How much data and how many shards do you have in the cluster?

Hi cris,

We have around 3,500 indices and 30,000 shards in a cluster with 2 nodes.

Is there another way than restarting ES?
We have so many shards that we are stuck at 54%.
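In case it helps to see where things are stuck, recovery progress and the remaining unassigned shards can be checked like this (localhost:9200 assumed):

```
# overall progress; the 54% corresponds to active_shards_percent_as_number
curl -s 'localhost:9200/_cluster/health?pretty'

# list the shards that are still unassigned, with the reason
curl -s 'localhost:9200/_cat/shards?v&h=index,shard,prirep,state,unassigned.reason' | grep UNASSIGNED
```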

That is far too much. Please read this blog post with guidance on shards and sharding, and then try to reduce that number dramatically by deleting and/or reindexing data. Restoring that many shards will take time, and I am not aware of any way to speed it up, so I suspect you will have to wait.
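As one sketch of the reindexing route: many small time-based indices can be merged into fewer, larger ones with the _reindex API and the originals deleted afterwards (the index names below are hypothetical):

```
# merge all daily indices for a month into a single monthly index
curl -s -XPOST 'localhost:9200/_reindex' -H 'Content-Type: application/json' -d '{
  "source": { "index": "logstash-2017.08.*" },
  "dest":   { "index": "logstash-2017.08" }
}'
```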

Once the cluster is back up you can try to temporarily close some indices in order to make the cluster easier to work with.
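Closing is a single call per index or index pattern; a sketch (the index name is only an example, and closed indices can be reopened later with _open):

```
curl -s -XPOST 'localhost:9200/logstash-2016.*/_close'
```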
