.kibana_task_manager index red in ECK, no snapshot possible

We have two Elasticsearch clusters in Kubernetes, managed by ECK (on Azure AKS), and are now trying to implement snapshots. The repository is in place.
However, when we try to actually take a snapshot, we get an error:

{"error":{"root_cause":[{"type":"snapshot_exception","reason":"[test_repo_1:test-snapshot/aPrOh1pfQi6tb7mwRgHAzw] Indices don't have primary shards [.kibana_task_manager_8.2.3_001]"}],"type":"snapshot_exception","reason":"[test_repo_1:test-snapshot/aPrOh1pfQi6tb7mwRgHAzw] Indices don't have primary shards [.kibana_task_manager_8.2.3_001]"},"status":500}

The .kibana_task_manager_8.2.3_001 index is red on both clusters. This is a major issue for us.
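For reference, checking why the primary is unassigned can be done with something along these lines (a rough sketch, using the same $HOST base path as the reroute call further down):

curl -XGET "https://$HOST/elasticsearch/_cat/indices?v&health=red"

curl -XGET "https://$HOST/elasticsearch/_cluster/allocation/explain?pretty" -H 'Content-Type: application/json' -d'
{
    "index": ".kibana_task_manager_8.2.3_001",
    "shard": 0,
    "primary": true
}'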

Is there a way to restore that index?
Or should we use a different way to back up our (rather small) indices, such as exporting them as JSON to disk?
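For what it's worth, the snapshot API also has ignore_unavailable and partial settings, which would presumably at least let a snapshot of the healthy indices go through; an untested sketch against the test_repo_1 repository from the error above (snapshot name made up):

curl -XPUT "https://$HOST/elasticsearch/_snapshot/test_repo_1/test-snapshot-partial?wait_for_completion=true&pretty" -H 'Content-Type: application/json' -d'
{
    "indices": "*",
    "ignore_unavailable": true,
    "partial": true
}'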

A user reported a similar issue here, but there is no answer.

We ended up using Elasticsearch only as temporary storage and handling data protection in another system.

Technically, we were able to make the index green - but empty - with

curl -XPOST "https://$HOST/elasticsearch/_cluster/reroute?pretty" -H 'Content-Type: application/json' -d'
{
    "commands": [{
        "allocate_empty_primary": {
            "index": ".kibana_task_manager_8.2.3_001",
            "shard": 0,
            "node": "any-old-node",
			"accept_data_loss":true
        }
    }]
}'
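Shard-level health of the index can then be verified with something like:

curl -XGET "https://$HOST/elasticsearch/_cluster/health/.kibana_task_manager_8.2.3_001?level=shards&pretty"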

However, this workaround means losing our backups.
Our manager told us to use a database with a proven backup/restore mechanism instead, and we could only agree.

Generally, metadata that is critical for backup/restore should not be kept in the system itself. In this case, that wasn't even the issue; the problem was simply that this index is flaky in ECK.
