Recently, I stumbled onto a situation where all my indices were deleted without warning.
I have a 3-node cluster running Elasticsearch 1.5.2 on Oracle JDK 8u45, where all nodes were master-eligible data nodes (the default node.master/node.data settings).
Then I added a fourth, master-eligible node with node.data set to false, and reconfigured the three old nodes to be data-only nodes, roughly like the sketch below.
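For reference, the role settings in elasticsearch.yml looked roughly like this (1.x syntax; a sketch, not the exact files):

    # the new dedicated master node
    node.master: true
    node.data: false

    # the three existing nodes, now data-only
    node.master: false
    node.data: true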
I started the new master node first. It had never been connected to a live cluster before, so it had no cluster metadata. Then I started the 3 data nodes.
A couple of minutes after that (no more than 2 or 3 minutes), I discovered that my indices existed but were completely empty. In the master log I could see messages stating that there were dangling indices and that they were going to be imported. Something similar to this:
dangling index, exists on local file system, but not in cluster metadata, scheduling to delete in [2h], auto import to cluster state [YES]
but without the "scheduling to delete in [2h]" part.
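As far as I can tell, this behaviour comes from the local gateway settings in 1.x, something like the following in elasticsearch.yml (the values shown are the documented defaults as I understand them, not something I set myself):

    # import dangling indices into the cluster state instead of deleting them
    gateway.local.auto_import_dangled: yes
    # how long dangling index data is kept on disk before deletion
    gateway.local.dangling_timeout: 2h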
So, I received no warning at all and my indices were gone.
It's important to note that I had several Logstash instances running elsewhere, sending index requests to the data nodes while they were starting up.
What I think might have caused the data loss is that, before the master node could complete the import of the dangling indices, one or more index requests for those same indices reached the data nodes and created new empty indices, and for some reason the import then failed because of that.
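If that theory holds, the race would go through automatic index creation, which as far as I know can be turned off in elasticsearch.yml (a sketch; I have not verified that this prevents my scenario):

    # refuse index requests for indices that do not exist yet,
    # so a stray Logstash request cannot create a fresh empty index
    action.auto_create_index: false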
Does that make sense?