I'm starting a project to index log files. I don't particularly want to
wait until the log files roll over. There will be files from hundreds of apps
running across hundreds of machines (not all apps intersect with all machines,
but you get the drift). Some roll over very fast; some may take days.
The problem is this: if I am constantly reindexing the same document
(same id), am I losing all the old space (store and/or index)? Or is
Elasticsearch/Lucene smart enough to say "here's a new version; we'll
overwrite the old store/index entries where they are the same, point to
this one, and add new ones"?
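Concretely, the pattern I have in mind is something like this (a rough
sketch using the Python elasticsearch client; the index name, type, and
id scheme are just placeholders):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    def reindex_log_file(path, lines):
        # Same id every time: each call replaces the previous version
        # of the whole file instead of adding a new document.
        es.index(
            index="logs",
            doc_type="logfile",   # doc types still existed in the 1.x era
            id=path,              # e.g. "host42:/var/log/app.log"
            body={"path": path, "content": "\n".join(lines)},
        )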
Certainly, there is a more sophisticated model that treats every line as a
unique document/row so that this never becomes an issue, but I'm not
ready to spend that kind of dev time and hardware on this problem. (Our
Elasticsearch solution is wrapped in a system that becomes really
heavy-handed when indexing such small pieces.)
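For what it's worth, the per-line model I mean would look roughly like
this (hypothetical id scheme; same placeholder names as above):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    def index_log_lines(path, lines):
        for line_no, line in enumerate(lines):
            # One immutable document per line: nothing is ever updated,
            # so no deleted documents accumulate in the segments.
            es.index(
                index="logs",
                doc_type="logline",
                id="%s:%d" % (path, line_no),
                body={"path": path, "line_no": line_no, "message": line},
            )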
Lucene will hold onto deleted documents until a merge is performed. An
update in Lucene is basically an atomic delete/insert.
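You can watch those deleted-but-not-yet-merged documents pile up in the
index stats (a sketch with the Python client; "logs" is whatever index
you are writing to):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # "docs.deleted" counts documents that were deleted (or replaced
    # by an update) but have not been merged away yet.
    stats = es.indices.stats(index="logs", metric="docs")
    print(stats["indices"]["logs"]["primaries"]["docs"]["deleted"])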
An optimize will help reclaim the space used by deleted documents. Did you
change your merge settings? Deleted documents should eventually be removed
whenever new segments are created.
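If you want to force the issue, the 1.x-era optimize API can expunge
deletes without merging everything down to one segment (a sketch; later
versions renamed this operation to forcemerge):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # Only rewrite segments that actually contain deletes instead of
    # forcing the whole index down to a single segment.
    es.indices.optimize(index="logs", only_expunge_deletes=True)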
I haven't changed my merge settings. How often should segments be created
and how often should merges happen naturally?
The default merge policy in Lucene (TieredMergePolicy) has a bias towards
segments with more deletes, so it is "trying" to merge those ones away.
You can increase this bias by setting index.reclaim_deletes_weight, but
be careful not to make it so high that awful merges are being selected.
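For example, bumping the weight on a live index might look like this (a
sketch with the Python client; the index name and the value 3.0 are
arbitrary):

    from elasticsearch import Elasticsearch

    es = Elasticsearch(["localhost:9200"])

    # The setting is dynamic, so it can be changed on a live index.
    # Higher values favor merging segments with many deletes; the
    # Lucene default is 2.0.
    es.indices.put_settings(
        index="logs",
        body={"index": {"reclaim_deletes_weight": 3.0}},
    )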
If you want to see the gory details, as of Elasticsearch 1.2 you can turn
on lucene.iw: TRACE in config/logging.yml to see when merges run, which
segments, how many deletes those segments had, etc.
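Concretely, that is a one-line addition under the logger section of
config/logging.yml (assuming the default 1.2 layout of that file):

    logger:
      # log Lucene IndexWriter merge activity
      lucene.iw: TRACE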