Recently, I stumbled into a situation where all my indices were deleted without warning.
I have a 3-node cluster running Elasticsearch 1.5.2 on Oracle JDK 8u45, where all nodes are master-eligible data nodes (the default master/data settings).
Then I added a fourth, master-eligible node with node.data set to false, and reconfigured the three old nodes to be data-only nodes.
I started the new master node first; it had never been connected to a live cluster before, so it had no cluster metadata. Then I started the three data nodes.
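The relevant parts of the elasticsearch.yml files looked roughly like this (simplified, other settings omitted):

```yaml
# New fourth node: dedicated master, holds no data
node.master: true
node.data: false

# The three original nodes: data only, master role removed
node.master: false
node.data: true
```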
A couple of minutes after that (no more than 2 or 3 minutes), I discovered that my indices still existed but were completely empty. In the master's log I could see a message stating that there were dangling indices and that they were going to be imported, something similar to this: "dangling index, exists on local file system, but not in cluster metadata, scheduling to delete in [2h], auto import to cluster state [YES]", but without the "delete in [2h]" part.
So, I received no warning at all and my indices were gone.
It's important to note that I had several Logstash instances running elsewhere, sending index requests to the data nodes while they were starting up.
What I think might have caused the data loss is that before the master node could complete the import of the dangling indices, one or more requests for those same indices reached the data nodes and created new, empty indices, and the import apparently failed because of that.
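In hindsight, if that is really what happened, disabling automatic index creation might have kept the Logstash requests from creating those empty indices while the cluster was still forming; a minimal sketch of that setting (I have not verified that it would have changed the outcome):

```yaml
# elasticsearch.yml - with automatic index creation disabled, an index
# request for a non-existent index is rejected instead of silently
# creating a new empty index, so clients cannot race the import
action.auto_create_index: false
```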
Your mistake is to assume that data nodes are aware of the indices they have stored, no matter what master nodes exist. That is not the case; only master nodes keep the full cluster state.
If you create a new, empty, single master without copying the old cluster state to it, and then attach the old data nodes to it, there is no chance the old indices will be recognized. This behavior is logical; it is not a bug, but expected by design.
I remember the introduction of "dangling indexes".
This feature was a solution from the days when copying data directories was the only way to make backups, long before snapshot/restore. Users were annoyed that (master) nodes ignored indices unpacked into a data directory from a file-system backup, so importing such indices at (master) node startup time was introduced.
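If I remember correctly, this behavior is driven by the local gateway settings, which also explain the "[2h]" and "auto import ... [YES]" parts of the log message quoted above; roughly like this in 1.x (setting names from memory):

```yaml
# elasticsearch.yml (Elasticsearch 1.x local gateway, names from memory)
# Whether dangling indices found on disk are imported into the
# cluster state: yes (default) | closed | no
gateway.local.auto_import_dangled: yes
# How long a dangling index is kept on disk before deletion when it
# is not imported (the "[2h]" in the log message)
gateway.local.dangling_timeout: 2h
```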
Taking the master role away from a node effectively makes the node "blind" with regard to cluster state management. I never do this because I know the danger; even riskier is having only one master node, which is a classic SPOF (single point of failure).
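To avoid that SPOF, one would normally run several dedicated master-eligible nodes and set the zen discovery quorum accordingly; a minimal sketch (the node count of three is illustrative):

```yaml
# elasticsearch.yml on each of three dedicated master-eligible nodes
node.master: true
node.data: false
# quorum of master-eligible nodes: (3 / 2) + 1 = 2, so losing a single
# master node neither takes the cluster down nor allows a split brain
discovery.zen.minimum_master_nodes: 2
```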
I agree this effect could be discussed in the guide. Because ES does not prevent users from making such mistakes, it should be well documented.
Documenting this will help users avoid such pitfalls; however, I still believe this to be a bug.
I performed a simple experiment (node configs are sketched after the steps):

1. Create a single-node cluster with a combined master/data node (DataNode).
2. Add some data to the cluster (index X is created with type Y).
3. Shut down the node and reconfigure it to be a data-only node.
4. Create another single-node cluster (with the same cluster name) on a different node/instance as master-only (MasterNode).
5. Start DataNode.
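For reference, the node settings in this experiment were roughly as follows (cluster and node names are just the ones I used above):

```yaml
# MasterNode: the new, empty, master-only node
cluster.name: my-cluster   # same cluster name on both nodes (illustrative)
node.name: MasterNode
node.master: true
node.data: false

# DataNode: originally master + data, reconfigured to data-only in step 3
cluster.name: my-cluster
node.name: DataNode
node.master: false
node.data: true
```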
When the data node is started, the master detects the new node, detects a dangling index, and auto imports it.
As a result, the index (X) with all its data (Y) is alive and well: no data loss, just as Elastic intended.
That's what makes my scenario stand out: my dangling indices were brutally expunged despite Elasticsearch's best effort to detect such incidents.
The fact that a single master is a SPOF is, in my view, unrelated in this particular case.
Fault tolerance is not always a requirement.
Besides, I do believe that this could be reproduced with more than 1 master node (when all of them see the same dangling index but a sinister index request interrupts the import process).