Hello,
We run ES 1.7 on a very large cluster, around 200 nodes (about 100B events per month).
I am struggling to keep the number of segments per shard low without having
to perform a forced _optimize (we like to do that in order to reduce ES memory usage and to avoid GCs when searching for data). I thought setting "index.merge.policy.segments_per_tier" and
"index.merge.policy.max_merge_at_once" to low values would limit the number of segments,
but no matter what I set, my shards always end up with a very high segment count.
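For reference, the forced merge we currently run looks roughly like this (the index name is illustrative):

```shell
# Force-merge a monthly index down to a single segment per shard.
# Uses the ES 1.x _optimize API; the call blocks until merging completes.
curl -XPOST 'localhost:9200/index_name_of_past_month/_optimize?max_num_segments=1'
```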
Our system stores time-based data, so we have one index per month. Running the hot_threads API, I see that merge tasks are running only on the latest index (the one we are currently loading data into).
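This is how I check segment counts and merge activity (the index name is illustrative; assumes the _cat API available in ES 1.x):

```shell
# List segments per shard for an old index to see how many have accumulated.
curl 'localhost:9200/_cat/segments/index_name_of_past_month?v'

# Show which threads are busy; merge threads only appear for the current index.
curl 'localhost:9200/_nodes/hot_threads'
```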
We use the default index.merge.* parameters, but trying to override them for an old index still has no effect:
curl -XPUT 'localhost:9200/index_name_of_past_month/_settings' -d
'{
"settings": {
"index.merge.policy.max_merge_at_once": 10,
"index.merge.scheduler.max_thread_count": 10,
"index.merge.scheduler.max_merge_count": 10,
"index.merge.policy.segments_per_tier": 5,
"index.merge.policy.max_merged_segment": "10gb"
}
}'
How can I make the merge policy pick up my new settings? Could this be related to merge throttling?