I won't rely on calculating file descriptors by a dry formula; I would just
share some real numbers instead.
On the ES cluster here (0.19.8) I observe 120-150 open files per shard
(with default settings, no segment merge tuning, not optimized). Each of
the 3 nodes holds 32 shards (4 indexes of 2*12 shards each, that is with 1
replica), resulting in a total of 4700-4800 open files on each node
(counted by lsof, at idle time = low query load, no indexing).
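For reference, this is roughly how I count them; the pid and the path
placeholders are of course just examples, adjust them to your node:

sudo /usr/sbin/lsof -p <elasticsearch pid> | wc -l    # whole node
sudo /usr/sbin/lsof -p <elasticsearch pid> | grep 'nodes/0/indices/<index name>/<shard #>/' | wc -l    # one shard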
If I take my numbers and count your 6 nodes with 67 indices and 32 shards
per index, each node must hold around 357 shards (or about 715 with the
replicas, 67 * 32 * 2 / 6). That is a whopping lot if you ask me. With my
worst case factor of 150 open files per shard, I get roughly 107,000 file
descriptors per node, which is already above your 100,000 limit. You are
very lucky you did not encounter "too many open files" even earlier!
I would recommend halving the number of shards per node, by doubling the
number of nodes, or halving the number of indexes, or the shards per index.
If that is not an option, go for the compound file format.
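To give an idea, the compound format can be switched on per index with the
update settings API (the index name here is made up, and if I remember
right the setting only affects newly written segments):

curl -XPUT 'localhost:9200/myindex/_settings' -d '{
  "index.compound_format" : true
}'

For new indexes you can also put it into elasticsearch.yml, together with
a smaller default shard count (the shard count can only be set at index
creation time, not changed on an existing index; the values below are just
examples):

index.compound_format: true
index.number_of_shards: 8
index.number_of_replicas: 1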
On Thursday, July 26, 2012 6:57:33 PM UTC+2, arta wrote:
I encountered 'Too many open files' after indexing a little more than 1
I use ES 0.19.3, 6 nodes in the cluster, 67 indices, 32 shards / index, 1
replica / index.
My /etc/security/limits.conf allows 100,000 file descriptors per proc.
Some of the elasticsearch processes indeed exceeded this limit (100,040).
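For reference, the corresponding lines look like the following, where
"elasticsearch" stands for whatever user the process runs as:

elasticsearch  soft  nofile  100000
elasticsearch  hard  nofile  100000

The effective limit can be checked with 'ulimit -n' in a shell of that
user.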
I use the default merge policy (tiered). According to the documentation,
index.merge.policy.segments_per_tier defaults to 10.
I thought the number of Lucene index segments per elasticsearch shard would
never exceed this number.
So I estimated the number of open files would not exceed
(67 indices * 32 shards * 2 copies * 10 Lucene index segments / 6 nodes) +
(some other ~ 10,000) ~ 20,000
I did 'lsof' and found there are more than 10 Lucene index segments in some
shards of my indices.
Ex) sudo /usr/sbin/lsof | grep <elasticsearch proc #> | grep
'nodes/0/indices/<index name>/<shard #>/.*\.fdt' | wc -l, I got 26
(there is one .fdt file per segment, so that shard has 26 segments).
I checked curl -XGET localhost:9200/_nodes/stats?pretty=true, and found no
merge activity reported there.
I think this means no merge was performed, right?
So, here's my question:
How can I configure elasticsearch to automatically perform merges during
indexing?
Thank you for your help.