I am running two separate Elasticsearch instances, x and y.
I am treating y as the "source of truth" for Kibana saved objects: x snapshots to its own S3 bucket (as a fallback in case y goes stale) but always restores from y's S3 bucket, all via Curator.
y snapshots to its own S3 bucket and never restores (at least not with Curator).
For whatever reason, each of my Elasticsearch instances shows both .kibana and .kibana_1, where .kibana is an alias pointing at the .kibana_1 index, which complicates the restore.
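For reference, the alias relationship can be confirmed against either instance with something like this (host and port are placeholders for however you reach the cluster):

  # Show the .kibana alias and the index it points to
  curl -s 'http://localhost:9200/_cat/aliases/.kibana?v'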
I am running Curator (as a k8s CronJob) to snapshot with the following action file:
actions:
  1:
    action: snapshot
    description: Creates a snapshot of .kibana indices
    options:
      ignore_empty_list: True
      repository: backup-x  # y is backing up to backup-y
      name: 'kibana-%Y%m%d%H%M%S'
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
      - filtertype: pattern
        kind: prefix
        value: '.kibana'
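As a sanity check, the snapshots landing in the repository can be listed with something like this (repository name as above, host and port are placeholders):

  # List snapshots currently stored in the backup-x repository
  curl -s 'http://localhost:9200/_cat/snapshots/backup-x?v'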
And for x, I have pointed the S3 repository at y's snapshot bucket and am running Curator (as a k8s CronJob) to restore with the following action file:
actions:
  1:
    action: restore
    description: Restores the .kibana indices from the latest snapshot with state SUCCESS
    options:
      ignore_empty_list: True
      repository: backup-y
      name:
      indices: ['.kibana_1']  # I have tried ['.kibana'] and ['.kibana', '.kibana_1'] as well
      wait_for_completion: True
      max_wait: 3600
      wait_interval: 10
    filters:
      - filtertype: state
        state: SUCCESS
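For completeness, the backup-y repository on x was registered against y's snapshot bucket roughly like this (bucket name and any client/credential settings below are placeholders, not my real values):

  # Register y's snapshot bucket as the backup-y repository on x
  curl -s -XPUT 'http://localhost:9200/_snapshot/backup-y' \
    -H 'Content-Type: application/json' \
    -d '{"type": "s3", "settings": {"bucket": "y-snapshot-bucket"}}'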
All I want is to overwrite x's Kibana saved objects with those from y's latest snapshot, but the restore keeps failing with the following error:
kubectl logs -n <my-namespace> elk-elasticsearch-curator-restore-1583451000-9b98d
2020-03-05 23:31:23,843 INFO Preparing Action ID: 1, "restore"
2020-03-05 23:31:23,851 INFO Trying Action ID: 1, "restore": Restores the .kibana indices from the latest snapshot with state SUCCESS
2020-03-05 23:31:25,512 INFO Restoring indices "['.kibana_1']" from snapshot: kibana-20200305010010
2020-03-05 23:31:25,580 ERROR Failed to complete action: restore. <class 'curator.exceptions.FailedExecution'>: Exception encountered. Rerun with loglevel DEBUG and/or check Elasticsearch logs for more information. Exception: TransportError(500, 'snapshot_restore_exception', '[backup-y:kibana-20200305010010/ZeYoq_gCTOuLt5kPFFHqOQ] cannot restore index [.kibana_1] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name')
What am I missing? Do I need to delete the existing indices before the restore, or close them? I had assumed Curator would handle this seamlessly.
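If closing or deleting really is the expected fix, is the idea just to run something like this against x right before the restore (index name taken from the error message, host and port are placeholders)?

  # Close the existing index so the restore can replace it...
  curl -s -XPOST 'http://localhost:9200/.kibana_1/_close'

  # ...or delete it outright
  curl -s -XDELETE 'http://localhost:9200/.kibana_1'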