Normally, the background merge process optimizes the index by merging
together smaller segments to create new bigger segments, at which time the
smaller segments are deleted. This process continues during scrolling, but
an open search context prevents the old segments from being deleted while
they are still in use. This is how Elasticsearch is able to return the
results of the initial search request, regardless of subsequent changes to
documents.
Tip
Keeping older segments alive means that more file handles are needed.
Ensure that you have configured your nodes to have ample free file handles.
See the section called “File Descriptors”.
Hello,
Having read the above description, can anyone tell me what happens to the segments after the scroll time expires? Will those segments be merged automatically? And what if many scrolls are active at once (say 50), how does that impact the Lucene segments and Elasticsearch? Comments?
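For context, here is a minimal sketch of the scroll workflow the quoted passage describes, using the standard REST endpoints via Python's requests library (the cluster URL and index name are assumptions for illustration). The point is that the search context, and the segments it pins, lives only for the keep-alive window, and clearing the scroll explicitly releases it sooner:

```python
import requests

ES = "http://localhost:9200"   # assumed local cluster
INDEX = "my-index"             # hypothetical index name

# Open a scroll: the search context (and the old segments it references)
# is kept alive for the "scroll" keep-alive window, renewed on each call.
resp = requests.post(
    f"{ES}/{INDEX}/_search",
    params={"scroll": "1m"},
    json={"size": 1000, "query": {"match_all": {}}},
).json()

scroll_id = resp["_scroll_id"]
hits = resp["hits"]["hits"]

while hits:
    # ... process the current page of hits here ...
    resp = requests.post(
        f"{ES}/_search/scroll",
        json={"scroll": "1m", "scroll_id": scroll_id},
    ).json()
    scroll_id = resp["_scroll_id"]
    hits = resp["hits"]["hits"]

# Clearing the scroll releases the search context immediately, so the
# retained segments become eligible for normal merging and deletion
# without waiting for the keep-alive to expire.
requests.delete(f"{ES}/_search/scroll", json={"scroll_id": [scroll_id]})
```

Once the scroll expires or is cleared, nothing holds the old segments open any longer, which is why keeping many scrolls active at the same time (and therefore many file handles and retained segments) is the main cost to watch.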