Hi all,
I have an index of around 10 GB, with 5 shards and 0 replicas. I submit an
optimize command over REST with max_number_segments=5, using the default
merge policy. Although it returns a proper response in the browser, I do
not see any activity happening in the background: no increase in I/O
waits, no increase in CPU utilization. The index is distributed across 5
nodes (alongside other indexes), with each shard on a separate
machine/node, and each node allocated an 8 GB heap.
Could somebody please help me figure out how/why this is happening and how
I could resolve it?
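For reference, the call I am issuing looks roughly like this (the index name
is hypothetical, and note that in the Elasticsearch API the parameter is
spelled max_num_segments — an unrecognized query parameter may simply be
ignored):

```shell
# Optimize request (assumed index name "myindex", default host/port).
# The Elasticsearch parameter is max_num_segments, not
# max_number_segments -- a misspelled parameter can be silently ignored,
# in which case the call returns OK but merges nothing extra.
curl -XPOST 'http://localhost:9200/myindex/_optimize?max_num_segments=5'
```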
Can you check how many segments you actually have via the segments API?

simon
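Something like this (assuming your index is called myindex) — it lists the
segments of every shard, with their sizes and doc counts:

```shell
# Inspect segments per shard for one index (assumed name "myindex").
curl -XGET 'http://localhost:9200/myindex/_segments?pretty'
```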
On Wednesday, April 10, 2013 4:29:57 PM UTC+2, tarang dawer wrote:
I checked; I still have 18 segments for the index, of varying sizes (90,
51, 65, 42 MB, etc.) across the shards. Despite the optimize command, the
segment structure remains the same. Is there something else required to be
done to "force" merging/optimization?
On Wednesday, April 10, 2013 4:29:57 PM UTC+2, tarang dawer wrote:
That math is right: max_number_segments applies per shard, so you will
have at most 5 segments per shard. You can force down to 5 segments total,
1 per shard, by setting max_number_segments=1.
Best Regards,
Paul
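A quick sketch of that arithmetic, using the numbers from this thread:

```python
# Why the optimize was a no-op: max_num_segments applies per shard,
# not per index. Values below are taken from the thread.
num_shards = 5
max_num_segments = 5      # value passed to the optimize call
segments_in_index = 18    # total observed via the segments API

# With 5 shards, up to 5 segments each, 25 segments are allowed.
allowed_total = num_shards * max_num_segments

# 18 total segments averages ~3.6 per shard, already under the limit,
# so the merge policy has nothing to do and optimize returns immediately.
avg_per_shard = segments_in_index / num_shards

# Forcing a full merge: max_num_segments=1 leaves 1 segment per shard.
after_force_merge = num_shards * 1
```

So an optimize with max_num_segments=1 would leave 5 segments total, one
per shard, and that merge work would show up as the I/O and CPU activity
originally expected.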
On Wednesday, April 10, 2013 9:30:53 AM UTC-6, tarang dawer wrote: