I've tried asking on IRC but can't get anybody to respond. I'm curious what happens when you decommission a node and later bring it back online.
For example, say you have a node and you decommission it, allowing all of its indices to be replicated elsewhere. Then you use scan/scroll to remove and update documents in indices on the cluster. Finally, you bring the decommissioned node back online. Will the indices on the recommissioned node be updated, discarded, or validated? Or should you remove the data from the node when decommissioning it?
Does the version of Elasticsearch have an impact on this?
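For context, by "decommission" I mean something like the following: exclude the node from shard allocation and wait until it holds no shards. Here is a rough sketch using Python's requests library against the standard _cluster/settings and _cat/shards APIs (the host, port, and node name are placeholders):

```python
# Sketch: decommission a node by excluding it from shard allocation,
# then wait until no shards remain on it. Assumes a cluster reachable
# at localhost:9200 and a node named "node-to-remove" (both placeholders).
import time
import requests

ES = "http://localhost:9200"
NODE = "node-to-remove"

# Ask the cluster to move all shards off the node.
requests.put(
    f"{ES}/_cluster/settings",
    json={"transient": {"cluster.routing.allocation.exclude._name": NODE}},
).raise_for_status()

# Poll _cat/shards until the node no longer holds any shards.
while True:
    shards = requests.get(
        f"{ES}/_cat/shards",
        params={"format": "json", "h": "index,shard,prirep,node"},
    ).json()
    remaining = [s for s in shards if s.get("node") == NODE]
    if not remaining:
        break
    print(f"{len(remaining)} shard(s) still on {NODE}, waiting...")
    time.sleep(10)

print(f"{NODE} holds no shards; it can be shut down safely.")
```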
ES will discover that the shards on the temporarily decommissioned node are out of date and will make fresh copies from the up-to-date shards on the nodes that were online while it was gone.
Not sure what you mean by the final question about ES version dependency, unless you're asking whether the above holds true for any version of ES. I'm not sure, but I'd be extremely surprised if it didn't.
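One way to watch this re-copying happen is to poll the standard _cat/recovery API while the node rejoins. A rough sketch in Python using the requests library (host and port are placeholders):

```python
# Sketch: watch shard copies being rebuilt after the node rejoins.
# Uses the standard _cat/recovery API; host/port are placeholders.
import time
import requests

ES = "http://localhost:9200"

while True:
    active = requests.get(
        f"{ES}/_cat/recovery",
        params={
            "format": "json",
            "active_only": "true",
            "h": "index,shard,type,stage,source_node,target_node,bytes_percent",
        },
    ).json()
    if not active:
        print("No recoveries in flight; the cluster has caught up.")
        break
    for r in active:
        print(f"{r['index']}[{r['shard']}] {r['type']} {r['stage']} "
              f"{r['source_node']} -> {r['target_node']} ({r['bytes_percent']})")
    time.sleep(5)
```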
So there is no updating of indices, similar to HDFS. We could imagine the indices being updated in place instead of fresh copies being made, using a transaction log or something similar. It is nice to understand what is happening under the surface.
So I will try to imagine what is happening:
The index shards on the node are checksummed as it is brought up, against checksums stored on the master. If they are out of date, they are invalidated (deleted?). If they are up to date, they are marked as valid, and the cluster might choose to relocate replicas to this node.
Is there any publicly available documentation that describes this process in more technical detail?
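To make the guess concrete, here is a purely conceptual sketch of the checksum idea, not how Elasticsearch is actually implemented: compare the files in a local shard directory against an authoritative manifest and decide which ones could be reused and which would have to be re-copied.

```python
# Conceptual sketch only -- an illustration of the checksum idea above,
# NOT Elasticsearch internals. It compares local shard files against an
# expected manifest and reports which could be reused vs. re-copied.
import hashlib
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def diff_shard(local_dir: Path, expected: dict[str, str]) -> tuple[list[str], list[str]]:
    """expected maps file name -> checksum of the authoritative copy."""
    reusable, stale = [], []
    for name, checksum in expected.items():
        local = local_dir / name
        if local.exists() and sha256(local) == checksum:
            reusable.append(name)   # identical file: no need to copy again
        else:
            stale.append(name)      # missing or out of date: must be fetched
    return reusable, stale
```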
> So there is no updating of indices, similar to HDFS. We could imagine the indices being updated in place instead of fresh copies being made, using a transaction log or something similar. It is nice to understand what is happening under the surface.
Unless I'm completely mistaken, the index won't necessarily be copied in its entirety. ES will copy the segment files as needed, which, depending on how much segment merging has taken place on the primary shard, could result in more or less data being copied.
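If you want to see which segments a shard copy is made of, the standard _cat/segments API lists them per shard. A rough sketch in Python with the requests library (host, port, and the index name my-index are placeholders):

```python
# Sketch: list the Lucene segments that make up each shard copy.
# Uses the standard _cat/segments API; host/port and index name are placeholders.
import requests

ES = "http://localhost:9200"

segments = requests.get(
    f"{ES}/_cat/segments/my-index",
    params={"format": "json", "h": "index,shard,prirep,segment,docs.count,size"},
).json()

for s in segments:
    kind = "primary" if s["prirep"] == "p" else "replica"
    print(f"{s['index']}[{s['shard']}] {kind}: segment {s['segment']}, "
          f"{s['docs.count']} docs, {s['size']}")
```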
> The index shards on the node are checksummed as it is brought up, against checksums stored on the master. If they are out of date, they are invalidated (deleted?). If they are up to date, they are marked as valid, and the cluster might choose to relocate replicas to this node.
Yes, I believe that's pretty much what happens.
> Is there any publicly available documentation that describes this process in more technical detail?