Related Qs about "A closed index can be opened which will then go
through the normal recovery process." from
What is involved in index recovery? (I'm assuming a simple close
followed by open results in this recovery process?) And roughly how
long might that take, say for a 10GB index?
What happens if some external indexer decides to a closed index? I
assume an exception is thrown? Any way to get ES to open it, allow
write, and maybe close it after the index has seen no reads/writes for
N seconds/minutes?
What is involved in index recovery? (I'm assuming a simple close
followed by open results in this recovery process?) And roughly how
long might that take, say for a 10GB index?
It will be pretty fast, it mainly involves the cost of opening Lucene for
each shard, and replaying the transaction log.
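A minimal sketch of measuring that recovery time against the REST API (host and index name below are hypothetical; the close/open endpoints and the cluster health wait are standard): close the index, reopen it, and wait until its shards are allocated again.

```python
import time
import requests

ES = "http://localhost:9200"   # hypothetical host
INDEX = "logs-2011-01-01"      # hypothetical ~10GB index

# Close the index, then reopen it; the open triggers the normal
# recovery process (opening Lucene per shard, replaying the translog).
requests.post(f"{ES}/{INDEX}/_close")

start = time.time()
requests.post(f"{ES}/{INDEX}/_open")

# Wait until the index's shards are allocated again.
requests.get(
    f"{ES}/_cluster/health/{INDEX}",
    params={"wait_for_status": "yellow", "timeout": "10m"},
)
print(f"reopened and recovered in {time.time() - start:.1f}s")
```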
What happens if some external indexer decides to a closed index? I
assume an exception is thrown? Any way to get ES to open it, allow
write, and maybe close it after the index has seen no reads/writes for
N seconds/minutes?
I assume you mean decides to index/search against the index? An exception
will be thrown. You will need to manage opening / closing it yourself.
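If you do want to automate that, here is a rough sketch of self-managing it (host, index name, and idle timeout are hypothetical; the document path and the exact error returned for a write against a closed index depend on the version, so this just checks for a non-2xx response): reopen on a failed write, retry once, and close again after the index has been idle for a while.

```python
import time
import requests

ES = "http://localhost:9200"   # hypothetical host
INDEX = "archive-2010-12"      # hypothetical index that is usually closed
IDLE_SECONDS = 300             # close again after 5 minutes with no writes

last_write = time.time()

def index_doc(doc_id, body):
    """Index a document, reopening the index first if it is closed."""
    global last_write
    url = f"{ES}/{INDEX}/_doc/{doc_id}"
    resp = requests.put(url, json=body)
    if resp.status_code >= 400:
        # Writes against a closed index are rejected; open it and retry once.
        requests.post(f"{ES}/{INDEX}/_open")
        requests.get(f"{ES}/_cluster/health/{INDEX}",
                     params={"wait_for_status": "yellow"})
        resp = requests.put(url, json=body)
    resp.raise_for_status()
    last_write = time.time()

def close_if_idle():
    """Call this periodically; closes the index after IDLE_SECONDS without writes."""
    if time.time() - last_write > IDLE_SECONDS:
        requests.post(f"{ES}/{INDEX}/_close")
```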
What is involved in index recovery? (I'm assuming a simple close
followed by open results in this recovery process?) And roughly how
long might that take, say for a 10GB index?
It will be pretty fast, it mainly involves the cost of opening Lucene for
each shard, and replaying the transaction log.
Probably worth noting that the transaction log step can be avoided if the
index is flushed after all docs have been added and prior to closing.
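Concretely, that is just a flush call before the close (index name hypothetical):

```python
import requests

ES = "http://localhost:9200"   # hypothetical host
INDEX = "logs-2011-01-01"      # hypothetical index

# Flush performs a Lucene commit and clears the transaction log,
# so the subsequent open has no translog left to replay.
requests.post(f"{ES}/{INDEX}/_flush")
requests.post(f"{ES}/{INDEX}/_close")
```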
What happens if some external indexer decides to a closed index? I
assume an exception is thrown? Any way to get ES to open it, allow
write, and maybe close it after the index has seen no reads/writes for
N seconds/minutes?
I assume you mean decides to index/search against the index? An exception
will be thrown. You will need to manage opening / closing it yourself.
It will be pretty fast, it mainly involves the cost of opening Lucene for
each shard, and replaying the transaction log.
Probably worth noting that the transaction log step can be avoided if the
index is flushed after all docs have been added and prior to closing.
Aha, thanks, was going to ask - replaying the transaction log for a day's
worth of transactions would probably not be fast enough, but flushing
before close and avoiding the transaction log replay sounds like what I'm
after. Thanks!