Small segments are merged into bigger segments, which, in turn, are merged into even bigger segments.
My cluster has a lot of indices, so there are a lot of segments. Every segment consumes file handles, memory, and CPU cycles, so heap usage stays above 65% and the cluster runs GC frequently.
I want to reduce the number of segments, but I don't want to use the _forcemerge API.
Can I set a minimum segment size, so that segments smaller than that get merged automatically? (See the sketch below for what I have in mind.)
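For example, I was imagining something along these lines: raising the tiered merge policy's floor so small segments are treated as merge candidates more aggressively, and/or lowering the segments-per-tier target. I'm not sure whether these index-level merge policy settings (such as index.merge.policy.floor_segment) are still supported in my version, and the index name and values below are just placeholders:

```
PUT /my-index/_settings
{
  "index": {
    "merge": {
      "policy": {
        "floor_segment": "16mb",
        "segments_per_tier": 5
      }
    }
  }
}
```

Would tuning something like this be a reasonable way to keep the segment count down without force-merging, or is it discouraged?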