Dangling index, node can't join cluster

Hi.
We have a three-node cluster, all nodes master-eligible, with 2 replicas per index. In theory, no data should be lost when we lose one node.
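For reference, this is roughly how one can verify the replica setting on such an index; it is only a sketch, and "localhost:9200" and "my-index" are placeholders for the real cluster address and index name:

```python
# Sketch: confirm the index carries 2 replicas, i.e. a copy of every shard on
# each of the three nodes, so a single node failure should not lose data.
# "localhost:9200" and "my-index" are placeholders.
import requests

settings = requests.get("http://localhost:9200/my-index/_settings").json()
replicas = settings["my-index"]["settings"]["index"]["number_of_replicas"]
print("number_of_replicas:", replicas)  # expect "2" for this setup
```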

One node came up without the disks where its data resides. To my surprise, it still joined the cluster. I stopped it, mounted the data mount points, and started it again. It then logged

dangling index, exists on local file system, but not in cluster metadata

The master logged

auto importing dangled indices

and worked on the import for some time, but it couldn't recover 4 indices. I had to close those.
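Something along these lines is how the affected indices can be found after such a restart; again just a sketch, with "localhost:9200" as a placeholder for the cluster address:

```python
# Sketch: after the restart, list indices that stayed red and any unassigned shards,
# to see which of the auto-imported dangling indices failed to recover.
# "localhost:9200" is a placeholder.
import requests

base = "http://localhost:9200"

red = requests.get(f"{base}/_cat/indices?health=red&h=index,health,status&format=json").json()
for idx in red:
    print("red index:", idx["index"], idx["status"])

shards = requests.get(f"{base}/_cat/shards?h=index,shard,prirep,state&format=json").json()
for s in shards:
    if s["state"] == "UNASSIGNED":
        print("unassigned shard:", s["index"], s["shard"], s["prirep"])
```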

Do you know what went wrong? Would it have been better to delete the indices on the disks that had been unavailable before restarting the node?

Hello,

Does that mean the node was part of the cluster, was then stopped, and was then restarted without its data folder? Also, was this node elected master at any point?

And did you delete the index from the cluster at any point, such that it would cause a re-import as a dangling index?

It's unclear to me what you mean here... was one index imported as dangling but 4 others were not?

Also, what version of Elasticsearch are you running?
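If you're not sure, the root endpoint reports the version. A minimal sketch, with "localhost:9200" as a placeholder:

```python
# Sketch: the root endpoint of an Elasticsearch node returns its version number.
# "localhost:9200" is a placeholder.
import requests

info = requests.get("http://localhost:9200/").json()
print(info["version"]["number"])
```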