I would like to implement a snapshot policy but I have a question.
Snapshots are incremental, so the first snapshot contains all the data, and the following ones contain only the differences.
If I delete the first snapshot, how can I restore the full data?
I'm using Curator for the snapshots.
Thank you for your answers.
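For reference, the kind of retention policy I have in mind would look roughly like this as a Curator action file (the repository name `my_repo`, the `curator-` snapshot prefix, and the 30-day retention are placeholders, not my real values):

```yaml
actions:
  1:
    action: delete_snapshots
    description: Delete snapshots older than 30 days
    options:
      repository: my_repo
      retry_count: 3
      retry_interval: 120
    filters:
    - filtertype: pattern
      kind: prefix
      value: curator-
    - filtertype: age
      source: creation_date
      direction: older
      unit: days
      unit_count: 30
```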
If you delete a snapshot, but it contains data that is needed for other snapshots, then that data is not deleted. Data is only deleted from the repository once it is not needed for any remaining snapshots.
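For example, deleting the oldest snapshot through the API (repository and snapshot names here are placeholders) is safe:

```json
DELETE _snapshot/my_repo/snapshot_1
```

Only the segment files that are unique to `snapshot_1` are removed from the repository; anything still referenced by later snapshots is kept, so those snapshots remain fully restorable.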
Thank you for your reply.
I have another question about restoring in one other cluster.
I have two clusters, one for production, another for test.
I want to do a snapshot of the production indices and restore it in the test cluster.
On both clusters I have the same path.repo, is that OK?
But the user ID and group ID are not the same between the two cluster environments, is that OK?
Or do I have to create two shared paths for the snapshots and copy all the files from one path to the other?
Yes, it should work to allow your test and production clusters to access the same repository, as long as only one of them has read-write access. If the two clusters run as different users then your production user will need read-write access to the repository and your test user will need read-only access. Also, you should set "readonly": true when setting up the repository in your test cluster.
The docs say:
If you register the same snapshot repository with multiple clusters, only one cluster should have write access to the repository. All other clusters connected to that repository should set the repository to readonly mode.
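Concretely, registering the shared filesystem repository on the test cluster might look something like this (the repository name `my_repo` and the location are placeholders; on a 6.x cluster the `readonly` flag goes inside `settings`):

```json
PUT _snapshot/my_repo
{
  "type": "fs",
  "settings": {
    "location": "/mnt/snapshots",
    "readonly": true
  }
}
```

The production cluster would register the same `type` and `location` but without the `readonly` setting, so it remains the only writer.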
Thank you David, it works.
I now have the same shared folder on both environments.
I have done a snapshot of the .kibana-6 index on production, and I can see it in the test environment.
But when I run a restore action on my test server, I see 2 indices: .kibana in green status and .kibana-6 in yellow status.
The .kibana is from the test server, the .kibana-6 comes from the production server, and I have 1 unassigned shard.
How can I do a "perfect" restore of this index (the goal is to have the saved searches and dashboards from production on the test server)?
And why do I have .kibana-6? Is it because we migrated from Kibana 5 to Kibana 6 eight months ago?
What does the cluster allocation explain API (GET _cluster/allocation/explain) say?
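For a specific unassigned shard you can pass a request body naming the shard; for example (assuming it is shard 0 of the restored index, and that it is the replica rather than the primary that is unassigned):

```json
GET _cluster/allocation/explain
{
  "index": ".kibana-6",
  "shard": 0,
  "primary": false
}
```

The response includes a per-node explanation of why the shard cannot be allocated.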
Yes, newer versions of the .kibana index have a version number in their names.
Sorry for the late answer.
Now I have no error or warning messages: I have the .kibana index (with 1 document, from the test server) and the .kibana-6 index (with 449 documents, from the prod server), both in green status.
I thought the restore action would overwrite the indices already present, because the .kibana index recreates itself when I delete it.
Maybe I have to restore into a blank environment, or rename .kibana-6 to .kibana when I do the restore.
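The rename idea would look something like this on the restore request (repository and snapshot names are placeholders; I'd also have to delete or close the existing .kibana index on the test cluster first, since a restore fails if an open index with the target name already exists):

```json
POST _snapshot/my_repo/my_snapshot/_restore
{
  "indices": ".kibana-6",
  "rename_pattern": ".kibana-6",
  "rename_replacement": ".kibana"
}
```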
No, a restore will not delete indices that are already present.
OK, so what is the best way to save and restore the .kibana index?
Because this index, if I understood correctly, contains all the saved searches, dashboards, etc.
I do a snapshot of this index in production, but when I want to restore it in the test environment, it's not working: I don't see my dashboards, visualisations, etc.
I think it's because in my test environment the default index name is .kibana, while in production it's .kibana-6.
What should I do?
Sorry, I don't know a lot about Kibana so I'd rather not guess how this is normally done. I suggest asking in the Kibana forum as there will be better-informed people there.
Thank you David, your answers helped me a lot!
I will ask my questions in the Kibana forum.
This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.