I'm in a bit of a pickle. A lot of my indices are gone after an upgrade.
I have Logstash pumping data into an alias.
After the upgrade the alias still pointed to the same index name, but the old data was gone; Elasticsearch had simply created a new index.
The indices were rolled over every two days, and all of the previous indices are gone as well.
Not all indices are gone, but I can't find any managed indices from the past few months.
I'm fairly confident that the raw files are still there, since I have about 2.5 TB of index data distributed across my 3 nodes.
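For reference, this is roughly how I checked that the data is still on disk; the data path below is the default for a package install and the port is the default HTTP port, so adjust for your setup:

```bash
# On each node: how much index data is actually sitting on disk
# (default data path for a deb/rpm install; the directory layout may differ by version)
du -sh /var/lib/elasticsearch/nodes/0/indices

# What the cluster itself currently reports
curl -s "localhost:9200/_cat/allocation?v"        # disk used per node
curl -s "localhost:9200/_cat/indices?v&s=index"   # indices the cluster still knows about
curl -s "localhost:9200/_cat/aliases?v"           # where the write alias points now
```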
I don't think anything was deleted.
Is there any way to recover indices from disk?
Any help is greatly appreciated!
I found backups of the data directories of each node; they are full filesystem backups. Now I'm wondering which approach would be best for restoring them.
First I thought I could simply copy all the files (the nodes directory) back over, but now I'm worried about what happens when index names clash, at the very least between the newest index of the lost data and the first index of the new data.
Another option would be to start up a separate single-node ES cluster with the combined data directories of the three backups and then transfer the old indices over via snapshots.
Or maybe it's better to make snapshots of the new data, fully restore the backups, and then restore the snapshots.
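For options 2 and 3, this is roughly the snapshot/restore sequence I have in mind, assuming I can get the old data visible again in a separate recovery cluster (here on port 9201) and that both clusters can reach a shared path such as /mnt/es_transfer listed under path.repo; the port, path, repository and snapshot names, and the logstash-* index pattern are all placeholders for my actual setup:

```bash
# 1) On both clusters: whitelist the shared location in elasticsearch.yml
#    (path.repo: ["/mnt/es_transfer"]) and restart, otherwise repository registration fails.

# 2) Register the repository on the recovery cluster holding the old data
curl -X PUT "localhost:9201/_snapshot/recovery_repo" \
  -H 'Content-Type: application/json' -d'
{ "type": "fs", "settings": { "location": "/mnt/es_transfer" } }'

# 3) Snapshot the old indices
curl -X PUT "localhost:9201/_snapshot/recovery_repo/old-data-1?wait_for_completion=true" \
  -H 'Content-Type: application/json' -d'
{ "indices": "logstash-*", "include_global_state": false }'

# 4) Register the same repository (read-only) on the live cluster
curl -X PUT "localhost:9200/_snapshot/recovery_repo" \
  -H 'Content-Type: application/json' -d'
{ "type": "fs", "settings": { "location": "/mnt/es_transfer", "readonly": true } }'

# 5) Restore with a rename so the old indices cannot clash with the newly created ones
curl -X POST "localhost:9200/_snapshot/recovery_repo/old-data-1/_restore" \
  -H 'Content-Type: application/json' -d'
{
  "indices": "logstash-*",
  "include_global_state": false,
  "rename_pattern": "(.+)",
  "rename_replacement": "restored-$1"
}'
```

If that works, the rename on restore would also take care of the name clash I was worried about with the plain file-copy approach.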
What would you suggest?
Any help is greatly appreciated!