We have two Elasticsearch clusters in Kubernetes, managed by ECK (on Azure AKS), and are now trying to implement snapshots. The repository is in place.
However, when we try to actually take a snapshot, we get an error:
{"error":{"root_cause":[{"type":"snapshot_exception","reason":"[test_repo_1:test-snapshot/aPrOh1pfQi6tb7mwRgHAzw] Indices don't have primary shards [.kibana_task_manager_8.2.3_001]"}],"type":"snapshot_exception","reason":"[test_repo_1:test-snapshot/aPrOh1pfQi6tb7mwRgHAzw] Indices don't have primary shards [.kibana_task_manager_8.2.3_001]"},"status":500}
The .kibana_task_manager_8.2.3_001 index is red on both clusters. This is a major issue for us.
Is there a way to restore that index?
Or should we use a different way to back up our (rather small) indices, such as exporting them as JSON to disk?
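One workaround the snapshot API itself offers is excluding the red index, or allowing a partial snapshot of indices whose primaries are unassigned. A sketch, assuming the repository name test_repo_1 from the error above; the snapshot name, host, and the exclusion pattern are illustrative and would need adjusting:

```shell
# Snapshot everything except the red system index; "partial": true lets the
# snapshot succeed even if some primary shards are unassigned (their data
# is then simply absent from the snapshot).
curl -X PUT "http://localhost:9200/_snapshot/test_repo_1/test-snapshot-1?wait_for_completion=true" \
  -H 'Content-Type: application/json' -d'
{
  "indices": "*,-.kibana_task_manager*",
  "ignore_unavailable": true,
  "partial": true
}'
```

Note that this only works around the error: the red index's contents are still not backed up.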
A user reported a similar issue here, but there is no answer.
However, this means losing our backups.
Our manager told us to use a database with a proven backup/restore mechanism instead, and we could only agree.
Generally, metadata that is critical for backup/restore should not be kept in the system itself. In this case, that wasn't even the root problem; this index is simply flaky under ECK.