Elasticsearch error - translog: Too many open files

(Lokhande) #1

We are running a 3-node ES cluster. Based on a suggestion from an expert on creating indices, we followed the approach of creating indices date-wise using a template.
With this approach the number of indices grows every day, and each index has 5 shards. One day we noticed the cluster reporting the error "translog: Too many open files...". We checked the ulimit, which is set to 16k, but found that the number of files opened by Elasticsearch on each node was >20k, which was causing the issue.
We have observed that Elasticsearch opens files when it opens an index and then keeps them open, so the open-file count is roughly the number of indices multiplied by 5 (shards). Because a new index is created every day, this count reached 20k, which exceeds the 16k ulimit.

Is there any way to restrict how many files Elasticsearch opens, so that it stays within the ulimit set on each Unix node?
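For anyone hitting the same symptom, a quick way to verify the per-process descriptor count against the applied limit is to read `/proc` on each Linux node. This is a sketch: it is demonstrated on the current shell's own PID (`$$`); in practice you would substitute the Elasticsearch PID (e.g. from `pgrep -f Elasticsearch`).

```shell
# Count open file descriptors for a given PID via /proc (Linux only).
count_fds() {
    ls "/proc/$1/fd" | wc -l
}

# Demonstrated on the current shell; replace $$ with the ES PID in practice.
count_fds "$$"

# The limit actually applied to that process (may differ from the shell's ulimit):
grep 'Max open files' "/proc/$$/limits"
```

Checking `/proc/<pid>/limits` rather than `ulimit -n` matters because the limit that counts is the one inherited by the Elasticsearch process itself, not the one in your login shell.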


(Magnus Bäck) #2

I suspect this isn't possible. Do you really need five shards per day? With a three-node cluster that's almost certainly excessive. How big is each index?

(Lokhande) #3

Thanks for the reply, Magnus, and sorry for the late response; I was on vacation. Yes, we have been advised to reduce the number of shards. For now we are going with 1 shard per index to tackle this issue and reduce the number of files opened per index. Index sizes vary from a few MBs to a few GBs. But still, is there any way to make the number of files Elasticsearch opens configurable, to avoid such issues?
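For reference, pinning the daily indices to a single primary shard can be done in the index template itself. A sketch, where the template name `daily-logs` and the pattern `logs-*` are examples, and note that newer Elasticsearch versions use `index_patterns` in place of the legacy `template` key:

```shell
# Template body giving every matching daily index one primary shard.
# "daily-logs" and "logs-*" are placeholder names for illustration.
TEMPLATE_BODY='{
  "template": "logs-*",
  "settings": { "index.number_of_shards": 1 }
}'

# Applying it would look like this (assumes ES listening on localhost:9200):
# curl -XPUT "localhost:9200/_template/daily-logs" \
#      -H "Content-Type: application/json" -d "$TEMPLATE_BODY"
echo "$TEMPLATE_BODY"
```

With 1 shard per daily index instead of 5, the open-file count per index drops proportionally, which is what brought the total back under the ulimit in our case.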

(Magnus Bäck) #4

No, I don't believe there is a way to limit the number of open files.
