How to proceed with a data node with a corrupt disk file system

I would really appreciate help on the correct course of action. The setup is 3 ELK nodes, all of which hold every role.
No shard replication is configured. Node 2 experienced a failure on the disk that contains the data folder. An old copy (about a month old) of that folder exists, but I know that simply copying it back in would not be sufficient.

My question is: what is the correct course of action at this point that would return the stack to normal operation?

  1. Install a new disk and just launch the node? By a stroke of luck, the lost data was our least important.
  2. Install the new disk, copy the old data over, and see if the node can recover that data?

Also, would it work to do option 1 while launching a separate experimental node with the old data folder mounted, then recover whatever data is still readable and reindex it remotely into the original cluster?
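
For reference, a rough sketch of what that reindex-from-remote step could look like, using the standard `_reindex` API with a remote source. The host addresses and index name are placeholder assumptions, and the original cluster would need the recovery node whitelisted via `reindex.remote.whitelist` in `elasticsearch.yml`:

```python
# Hypothetical sketch: pull an index that the recovery node can still serve
# into the original cluster using reindex-from-remote.
# Host names and index names are placeholders, not real values from this setup.
import requests

ORIGINAL_CLUSTER = "http://original-node-1:9200"  # assumption: destination cluster URL
RECOVERY_NODE = "http://recovery-node:9200"       # assumption: experimental node URL
INDEX = "logs-recovered"                          # assumption: index to copy over

body = {
    "source": {
        "remote": {"host": RECOVERY_NODE},
        "index": INDEX,
    },
    "dest": {"index": INDEX},
}

# Run as a background task so large indices don't hold the HTTP request open.
resp = requests.post(
    f"{ORIGINAL_CLUSTER}/_reindex",
    params={"wait_for_completion": "false"},
    json=body,
    timeout=60,
)
resp.raise_for_status()
print(resp.json())  # returns a task id you can poll with the Tasks API
```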

Option 2 will work, but you will lose the data. You can try 1, I've seen it work, but it's not guaranteed.

You are best off using snapshot and restore here.
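
For anyone finding this later, a minimal sketch of the snapshot and restore workflow against the REST API, shown in Python with requests. The node address, repository location, and names are placeholder assumptions, and the repository path must be listed under `path.repo` in `elasticsearch.yml` on every node:

```python
# Minimal sketch of snapshot/restore, assuming a shared filesystem repository
# reachable from all nodes. All names and paths below are placeholders.
import requests

ES = "http://localhost:9200"  # assumption: any node in the cluster

# 1. Register a filesystem snapshot repository.
requests.put(
    f"{ES}/_snapshot/my_backup",
    json={"type": "fs", "settings": {"location": "/mnt/es_backups"}},
).raise_for_status()

# 2. Take a snapshot of all indices (run this on a regular schedule going forward).
requests.put(
    f"{ES}/_snapshot/my_backup/snapshot_1",
    params={"wait_for_completion": "true"},
).raise_for_status()

# 3. After a disk failure, restore from the last good snapshot.
#    Indices being restored must not exist in the cluster, or must be closed first.
requests.post(
    f"{ES}/_snapshot/my_backup/snapshot_1/_restore",
    json={"indices": "*", "include_global_state": False},
).raise_for_status()
```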


Thank you for the reply, warkolm!

Just to make sure I understood that correctly, because I suspect you might have flipped the numbers.

You have a birthday cake next to your name! Happy birthday!

Yes, they are the wrong way around, sorry.

