What is the minimum storage space Elasticsearch nodes should have to ensure
smooth merging of segments, accounting for index optimize calls? From my
understanding, Lucene needs 3X the index size in disk space [1], especially
during forced merges. Our use case is not memory bound. Does that mean I
should plan capacity so that index data uses only 1/3 of total disk space?
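A back-of-the-envelope sketch of that sizing rule, assuming the 3X transient overhead figure holds (the function name and overhead factor are illustrative, not an Elasticsearch API):

```python
def max_index_size(disk_bytes, overhead_factor=3):
    """If a forced merge can transiently need overhead_factor times the
    index size on disk, the largest index a node can safely hold is
    disk / overhead_factor. Hypothetical sizing rule for illustration."""
    return disk_bytes / overhead_factor

GB = 1024 ** 3
# Under this assumption, a node with 300 GB of disk should plan for
# at most ~100 GB of index data.
print(max_index_size(300 * GB) / GB)  # 100.0
```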
Don't we need to consider extra space for the compound file format? In the
worst case with CFS enabled, would a forceMerge to 1 segment need 40GB of
free space for 2 x 10GB segments?
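The worst case described above can be sketched as arithmetic: the merge first writes the combined segment, and with CFS enabled it may then also write a compound-file copy of it before the originals are deleted. This is a hedged estimate of transient free space, not an exact Lucene figure:

```python
def forcemerge_free_space(segment_sizes_gb, cfs=True):
    """Worst-case transient free space (in GB) for a forceMerge down to
    one segment: the merged segment itself, plus (with CFS) a
    compound-file copy of it before the source segments are removed."""
    merged = sum(segment_sizes_gb)
    return merged * (2 if cfs else 1)

print(forcemerge_free_space([10, 10]))             # 40 -> the 40GB worst case
print(forcemerge_free_space([10, 10], cfs=False))  # 20
```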
Also, index.merge.policy.max_merge_size defaults to unbounded; are there any
recommended values for it?
There is no disk-based circuit breaker turned on by default, and the index
just appears to go red when there is no free disk space. So I wanted to set
safeguards at my end to avoid this issue.
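One possible client-side safeguard, since (as of this thread) Elasticsearch has no disk-based circuit breaker: check free space on the data path before sending writes. The threshold value here is an arbitrary example, not a recommendation:

```python
import shutil

def safe_to_index(data_path, min_free_fraction=0.2):
    """Client-side guard: return False when free disk on data_path drops
    below min_free_fraction of total capacity, so the indexer can pause
    instead of letting the index go red."""
    usage = shutil.disk_usage(data_path)
    return usage.free / usage.total >= min_free_fraction

# Example: pause bulk indexing when the volume is nearly full.
if not safe_to_index("/"):
    print("low disk: pausing indexing")
```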
On Friday, 3 April 2015 20:30:05 UTC-7, Mark Walkom wrote:
It needs as much space as the segments it's merging, so if you have 2 x
10GB segments then you'd want at least 20GB free.
Disk watermarks are of no use when all nodes in the cluster are running low
on disk and it is the existing shards that receive continuous writes. It
would be great if ES could fail such writes on low disk space rather than
letting the index go red.
Apache, Apache Lucene, Apache Hadoop, Hadoop, HDFS and the yellow elephant
logo are trademarks of the
Apache Software Foundation
in the United States and/or other countries.