As the watches are just stored in an index, you can treat them as such.
You can run a snapshot operation against this index, as well as a restore operation. You should, however, ensure that Watcher is stopped when restoring, and make sure the index does not exist, as the restore would fail otherwise. If you use monitoring, you might also want to disable it for that time, as it tries to store watches in the .watches index.
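A minimal sketch of that sequence, assuming a 6.x cluster, the same $ELASTICSEARCH_URI, and the repository and snapshot names used in the restore call below (the Watcher stop/start endpoints are the standard _xpack/watcher APIs):

```shell
# Stop Watcher so it does not write to or recreate the .watches index
curl --silent --request POST "$ELASTICSEARCH_URI/_xpack/watcher/_stop"

# Delete the existing .watches index so the restore can recreate it
curl --silent --request DELETE "$ELASTICSEARCH_URI/.watches"

# Restore only the .watches index from the snapshot
curl --silent --request POST --header 'Content-Type: application/json' \
  "$ELASTICSEARCH_URI/_snapshot/repository-gcs/2018-04-10-all/_restore?wait_for_completion=true&pretty" \
  --data '{"indices": ".watches"}'

# Start Watcher again once the restore has completed
curl --silent --request POST "$ELASTICSEARCH_URI/_xpack/watcher/_start"
```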
I assume xpack.monitoring.enabled is part of elasticsearch.yml and therefore has to be propagated to all nodes in the cluster, and that after restoring the .watches index I would toggle it back, followed by another restart of all nodes. Correct?
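If that is the case (in 6.x, xpack.monitoring.enabled is a static node setting), each node would need the line below and a restart for it to take effect; re-enabling it afterwards is the same edit in reverse, plus another restart. This is a config sketch, not a verified procedure for every version:

```yaml
# elasticsearch.yml on every node: disables X-Pack monitoring
# while the .watches index is being restored (static setting, needs restart)
xpack.monitoring.enabled: false
```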
# curl --silent --request POST --header 'Content-Type: application/json' "$ELASTICSEARCH_URI/_snapshot/repository-gcs/2018-04-10-all/_restore?wait_for_completion=true&pretty" --data '{"indices": ".watches"}'
{
  "error" : {
    "root_cause" : [
      {
        "type" : "snapshot_restore_exception",
        "reason" : "[repository-gcs:2018-04-10-all/KD9EAEDDT-a-NXvpefpYHQ] cannot restore index [.watches] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"
      }
    ],
    "type" : "snapshot_restore_exception",
    "reason" : "[repository-gcs:2018-04-10-all/KD9EAEDDT-a-NXvpefpYHQ] cannot restore index [.watches] because an open index with same name already exists in the cluster. Either close or delete the existing index or restore the index under a different name by providing a rename pattern and replacement name"
  },
  "status" : 500
}
#
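The error message itself names the two ways out: delete (or close) the existing .watches index first, or restore under a different name. The rename variant uses the standard rename_pattern / rename_replacement restore parameters; the target name "restored-watches" here is an arbitrary example:

```shell
# Restore .watches under a different name instead of deleting it;
# the renamed index can then be inspected or reindexed manually.
curl --silent --request POST --header 'Content-Type: application/json' \
  "$ELASTICSEARCH_URI/_snapshot/repository-gcs/2018-04-10-all/_restore?wait_for_completion=true&pretty" \
  --data '{
    "indices": ".watches",
    "rename_pattern": "\\.watches",
    "rename_replacement": "restored-watches"
  }'
```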