Merge did not occur automatically

Hi,
I encountered 'Too many open files' after indexing a little more than 1 billion docs.
I use ES 0.19.3, 6 nodes in the cluster, 67 indices, 32 shards / index, 1 replica / index.
My /etc/security/limits.conf allows 100,000 file descriptors per process.
Some of the elasticsearch processes indeed exceeded this limit (100,040).
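
For reference, this is roughly how that limit is set and verified (the user name "elasticsearch" is an assumption; replace <elasticsearch pid> with a real pid):

# /etc/security/limits.conf entries (assuming ES runs as user "elasticsearch"):
#   elasticsearch soft nofile 100000
#   elasticsearch hard nofile 100000
# Verify the limit a running process actually got:
grep 'open files' /proc/<elasticsearch pid>/limits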

I use the default merge policy (tiered). According to http://www.elasticsearch.org/guide/reference/index-modules/merge.html, index.merge.policy.segments_per_tier defaults to 10.
I thought the number of Lucene index segments per shard would never exceed this number.
So I estimated the number of open files would not exceed
(67 indices * 32 shards * 2 copies * 10 Lucene index segments / 6 nodes) + (some other ~ 10,000) ~ 20,000

I ran 'lsof' and found that there are more than 10 Lucene index segments in many of my shards.
Ex) sudo /usr/sbin/lsof | grep <elasticsearch proc #> | grep 'nodes/0/indices//<shard #>/.*.fdt' | wc -l gave me 26.
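
A simpler cross-check straight on disk (a sketch; the path assumes the default data layout, and <data path>, <cluster name>, <index name>, <shard #> are placeholders):

# one .fdt (stored fields) file per segment that has stored fields
ls <data path>/<cluster name>/nodes/0/indices/<index name>/<shard #>/index/*.fdt | wc -l
# or count every open file belonging to that one shard
sudo /usr/sbin/lsof -p <elasticsearch pid> | grep 'indices/<index name>/<shard #>/' | wc -l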

I checked curl -XGET localhost:9200/_nodes/stats?pretty=true, and found "merges":{"current":0
I think this means no merge was performed, right?
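
For reference, the exact command and the fields to look at (field names as of the 0.19-era stats API; check your own output):

curl -XGET 'localhost:9200/_nodes/stats?pretty=true'
# "merges" : { "current" : 0, "total" : ..., "total_time_in_millis" : ... }
# note: "current" counts merges running at this instant; "total" is cumulative since node start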

So, here's my question:
How can I configure elasticsearch to automatically perform merge during indexing?

Thank you for your help.

correction!

Incorrect:

(67 indices * 32 shards * 2 copies * 10 Lucene index segments / 6 nodes) + (some other ~ 10,000) ~ 20,000

Correct:

((67 indices * 32 shards * 2 copies * 10 Lucene index segments * about 10 files per Lucene index segment) / 6 nodes) + (some other descriptors < 10,000) ~ 71,500 + 10,000 < 82,000 file descriptors (still under the 100,000 limit)

Sorry, there was a typo!

Hi,

I wouldn't rely on calculating file descriptors from a dry formula; I would just
count the evidence.

On the ES cluster here (0.19.8) I observe 120-150 open files per shard
(with default settings, no segment merge tuning, not optimized). Each of
the 3 nodes has 32 shards (4 indexes of 2*12 shards each, that is, 1
replica level), resulting in a total of 4700-4800 open files on each node
(counted by lsof at idle time = low query load, no indexing).
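
Counting it yourself is straightforward (a sketch; the grep pattern assumes the default on-disk layout):

# total open files held by the elasticsearch process
sudo lsof -p <elasticsearch pid> | wc -l
# open files broken down per shard directory
sudo lsof -p <elasticsearch pid> | grep -o 'indices/[^/]*/[0-9]*' | sort | uniq -c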

If I take my numbers and apply them to your 6 nodes with 67 indices and 32
shards per index, there are 67 * 32 * 2 = 4288 shards including replicas,
i.e. roughly 715 shards per node. That is a whopping lot if you ask me.
With my worst-case factor of 150, I get roughly 107,000 file descriptors
per node. You are very lucky you did not encounter "too many open files"
even earlier!

I would recommend halving the number of shards per node: double the
nodes, halve the number of indexes, or halve the shards per index. If that
is not an option, go for the compound format.

Best regards,

Jörg


Thanks for the reply, Jörg.
I will eventually add more hardware; that's why the number of shards is relatively big now.
Since we can't increase the number of shards later on, I had to start with that number from the beginning...

As for the number of shards, yes, I have 4288 shards in total.
Divided over the 6 current nodes, that is 67 * 32 * 2 / 6 ~ 715 shards per node.
Taking your number of 150, the number of file descriptors needed will be 107,250.
The limits.conf number (100,000) is an arbitrary number and the system limit is much higher.
Maybe I need to increase it, but I want to know how much more is enough.
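
One way to see the actual headroom before picking a new number (replace <elasticsearch pid> with a real pid):

# the system-wide maximum number of open files
cat /proc/sys/fs/file-max
# descriptors the elasticsearch process is using right now
sudo ls /proc/<elasticsearch pid>/fd | wc -l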

Hi,

It's also worth visualizing that in the 'tiered' merge policy there are
multiple 'tiers' of segments. You've configured that there can be 10
segments per tier, but since there can be multiple tiers you might end up
with many more segments. Merging will occur while you index, whenever the
policy feels it's appropriate, but you may very well continue to have many
segments in your index (which can actually improve performance).
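
To make the tier arithmetic concrete: with segments_per_tier=10, a shard can legitimately hold ~10 large segments, plus ~10 in the next-smaller tier, plus ~10 below that, and so on, so the 26 .fdt files counted above are normal. If fewer segments matter more to you than extra merge I/O, the per-tier limits can be lowered at index creation time, e.g. (a sketch; setting names per the merge docs cited earlier, <new index> is a placeholder):

curl -XPUT 'localhost:9200/<new index>' -d '{
  "settings" : {
    "index.merge.policy.segments_per_tier" : 5,
    "index.merge.policy.max_merge_at_once" : 5
  }
}'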


Thanks Chris,

"Merging will be occurring while you index when the policy feels it's appropriate"

Is there any information on what the criteria are for "the policy feels it's appropriate"?
Any guidance on which code I should read would also be very helpful.

If you feel comfortable looking through some fairly intense Lucene code
then I recommend looking at
http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_3_6/lucene/core/src/java/org/apache/lucene/index/TieredMergePolicy.java?revision=1362113&view=markup
and
http://svn.apache.org/viewvc/lucene/dev/branches/lucene_solr_3_6/lucene/core/src/java/org/apache/lucene/index/MergePolicy.java?revision=1324898&view=markup

There is also quite a nice blog entry about merging at Changing Bits: Visualizing Lucene's segment merges (http://blog.mikemccandless.com/2011/02/visualizing-lucenes-segment-merges.html).


May I ask how many documents you have in your index? You are running a lot
of shards and I wonder if that is really necessary.
Anyway, there is another way of reducing the number of open files in Lucene,
i.e. ES. You can also use the Compound File System (CFS) option, which
forces the per-segment files to be packed into a single compound file. This
"can" influence your search performance, so you should evaluate, but it
reduces the # of open files dramatically.
See the index modules documentation --> index.compound_format
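
For example, something like this should do it (a sketch; assuming the index settings update API of that era, with <index> as a placeholder; newly written segments use the compound format, and existing segments pick it up as they get merged away):

curl -XPUT 'localhost:9200/<index>/_settings' -d '{
  "index.compound_format" : true
}'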

simon


Thanks for your guidance, Chris.
I'll read the source code. (Hopefully I can understand it...)

Thanks for your input, Simon.
I'll take a look and experiment with CFS.

"May I ask how many documents you have in your index."
I will eventually have around 20 billion docs indexed.
20,000,000,000 / ~2,000 shards ~ 10 million docs / shard
I wanted to have more shards (in other words, fewer docs per shard),
but I encountered the 'Too many open files' issue even in an experimental run I did a couple of months ago,
so I gave up on having more shards.
Maybe I could have increased the amount of hardware, but there was a budget issue, too :)