Snapshot recovery

I have an unusual problem. I can't find a solution because I have trouble explaining it.
I'm using ELK and I create a new index every day. In order to free space on my production system, I create compressed snapshots, and then I retrieve those snapshots on my local system. Once the snapshots have been pulled, I delete them from production. Everything works fine except for one detail:
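For reference, here is roughly what I run every day (the repository name `my_backup` and the host are placeholders, not my real setup):

```shell
# Create a compressed snapshot of the day's index on production.
# (my_backup and localhost:9200 are placeholders.)
curl -XPUT "localhost:9200/_snapshot/my_backup/snapshot.2018-05-21?wait_for_completion=true" \
  -H 'Content-Type: application/json' -d '{
    "indices": "logstash.2018-05-21",
    "include_global_state": false
  }'

# Later, after the rsync pull has finished, free the space on production.
curl -XDELETE "localhost:9200/_snapshot/my_backup/snapshot.2018-05-21"
```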

  • The snapshots are not visible on my local system. The files are there, but Elasticsearch doesn't see them, and I know why: it's because I synchronize the snapshots with rsync. Rsync pulls the new snapshots, and it also pulls the repository metadata.
    When I delete a snapshot on my production system, the files are deleted from the production disk, and the snapshot properties (e.g. the snapshot list) are updated. Then, when I run rsync, my local repository properties are updated and the snapshot disappears (but the files remain).

I'll give you an example:
On my production system I create a snapshot called snapshot.2018-05-21 containing the Logstash index logstash.2018-05-21. Then I rsync the repository: all the files are transferred to my local drive.
The next day, I create a snapshot called snapshot.2018-05-22 containing the Logstash index logstash.2018-05-22, and I delete the snapshot snapshot.2018-05-21. Then I rsync the repository: all the new files are transferred, but the old files are not deleted.
Once this operation is finished, my local drive holds the snapshot files for both 2018-05-21 and 2018-05-22, but my local repository thinks that snapshot.2018-05-21 was deleted (because I deleted it on production, and the repository properties have been synchronized).

Here is my question: is there an Elasticsearch command that can help me? I need to re-scan my local repository so that Elasticsearch detects that my old snapshot (2018-05-21) still exists.

I hope I was clear... Maybe I'm not using your tool the right way. In the end, what I need is a big backup on my local system, with three years of history. On production, I only need six months.
Because of network restrictions, my servers can't talk to each other directly. I have to work over SSH.

Is there anyone to help me with my problem? Should I add some explanations?
Thanks for your help, I really don't know what to do.

No, I can't think of a way to do that.

I think it'd work better to create a new repository every so often (e.g. monthly). Then, rather than deleting individual snapshots from your single repository, you can delete an entire repository when it's no longer needed.
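A sketch of what that could look like (the repository names and the path are examples; with a shared-filesystem repository, the path has to be listed under path.repo in elasticsearch.yml on every node):

```shell
# Register one filesystem repository per month (names/paths are examples).
curl -XPUT "localhost:9200/_snapshot/logs-2018-05" \
  -H 'Content-Type: application/json' -d '{
    "type": "fs",
    "settings": { "location": "/backups/logs-2018-05", "compress": true }
  }'

# Daily snapshots then go into the current month's repository as before.

# When a month falls out of the retention window, unregister the repository...
curl -XDELETE "localhost:9200/_snapshot/logs-2018-05"
# ...and remove its directory on disk (unregistering does not delete the files).
rm -rf /backups/logs-2018-05
```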

Right, thanks... I should have thought of that solution myself. I'm alone on this project and I seriously need more brainstorming!
Thanks. I'll do what you suggest and delete big repositories instead of individual snapshots... But still, maybe you should consider this feature. I don't think I'm the only one trying to do things like this... And there's more: imagine there is a disk problem and the repository gets corrupted. Being able to re-scan it would be a great idea...
Have a nice day :smiley:

Oh, and just one last question... Now that I'm in this situation, is everything lost? I mean, I have nearly 12 months of files that are invisible. Is there a way to cheat (rename a folder, or change a parameter by hand)? Maybe I could create fake snapshots and copy the files into them?

I would expect most people to allow their production clusters to write directly to the proper location rather than to try and do what you're doing with rsync. Can you explain the benefits of your setup?

I can't think of one, unless you've got some way of rolling everything in the repository back to an earlier version.

The price. Our production system costs a lot and my local system is free. Everything is backed up on tapes... We pay a provider for our production environment and it costs a lot. We don't need that kind of service for archiving production logs.
Moreover, Elasticsearch is a little slow when there is a lot of data, and when we need big statistics, we prefer to run them on a local, non-critical machine. If the query is too big and the machine crashes, we just restart it...
Thanks a lot for your help.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.