Red indices when restoring from a snapshot that contains indices in the cold phase

Hi All,

I have a snapshot in S3 from cluster A (7.11.x) and tried to restore it into cluster B (7.13.x). I noticed that indices on cold nodes, e.g. index-data-2021.03, were restored as e.g. restored-shrink-index-data-2021.03 and came up red (0 documents, 0 bytes).
What is the problem here and how can I overcome it to restore the index?

Their ILM policy is missing on the new cluster (I would assign a new one).

Thank you in advance for your answers.

The cluster allocation explain API answers questions like this. What does it say?
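For example (using one of the red index names from above; shard 0 and primary are assumptions):

GET _cluster/allocation/explain
{
  "index": "restored-shrink-index-data-2021.03",
  "shard": 0,
  "primary": true
}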

Most of the nodes report that they cannot host this index, as it is meant for the cold tier.
The cold tier node reports that a snapshot repository, related to the ILM/SLM policy, is missing. I understand that I have to register the same repo with the same name.

"deciders" : [
        {
          "decider" : "restore_in_progress",
          "decision" : "NO",
          "explanation" : "shard has failed to be restored from the snapshot [migration_data:migration_index_data_06-2021.07.20-q2cdf65kr4-7dlltfhorab/NNgW2TOXQCuOgbF9NnPCSw] because of [failed shard on node [NMax3YN1RV2w-ooGQOexVQ]: failed to create shard, failure RepositoryMissingException[[index-data_slm_backups_repo] missing]] - manually close or delete the index [restored-index-data-2021.05] in order to retry to restore the snapshot again or use the reroute API to force the allocation of an empty primary shard"
        }
      ]
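For reference, registering the missing repository looks like this (the bucket name is a placeholder; marking the repo readonly avoids the new cluster writing into a repository the old cluster may still be using):

PUT _snapshot/index-data_slm_backups_repo
{
  "type": "s3",
  "settings": {
    "bucket": "placeholder-bucket",
    "readonly": true
  }
}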

I created the same repo with the same name and managed to "restore" these indices, but I get an ILM error:

"illegal_argument_exception: index [restored-shrink-index-data-2021.23] in snapshot [found-snapshots/tZZAEJD8SHa_fAc4mew9-w:2021.07.27-restored-shrink-index-data-2021.23-ilm_policy_index_data_existing-x7cpnk7nrwypb8vgohwfw] is a snapshot of a searchable snapshot index backed by index [shrink-index-data-2021.23] in snapshot [index-data_slm_backups_repo/:2021.06.24-shrink-index-data-2021.23-ilm_no_delete_existing-w3sddtalqpoayxifxpcbzq] and cannot be mounted; did you mean to restore it instead?"

Yeah, I would like to restore it and apply new ILM policies ...

Should I use some other options in the restore request?

Looks like you tried to mount these indices rather than restoring them?
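For comparison, mounting a searchable snapshot is a separate API call (a sketch with placeholder repository, snapshot, and index names):

POST /_snapshot/my_repository/my_snapshot/_mount?wait_for_completion=true
{
  "index": "my_index"
}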

I used

POST /_snapshot/migration_data/migration_index_data_02-2021.07.16-hdydzipwqjk2gp3gvkvghq/_restore
{
	"indices": "restored-shrink-index-data-2021.23",
	"ignore_unavailable": true,
	"include_global_state": false
}

Should I use "include_global_state": true?

include_global_state

(Optional, Boolean) If false, the global state is not restored. Defaults to false.

If true, the current global state is included in the restore operation.

The global state includes:

  • Persistent cluster settings
  • Index templates
  • Legacy index templates
  • Ingest pipelines
  • ILM lifecycle policies
  • For snapshots taken after 7.12.0, data stored in system indices, such as Watches and task records, replacing any existing configuration (configurable via feature_states)
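So if the goal were to carry the snapshot's own ILM policies over, the restore could be run with the flag enabled (a sketch reusing the snapshot name from above; note this would also import templates and other cluster state from the snapshot):

POST /_snapshot/migration_data/migration_index_data_02-2021.07.16-hdydzipwqjk2gp3gvkvghq/_restore
{
  "indices": "restored-shrink-index-data-2021.23",
  "include_global_state": true
}

Since the plan here is to assign new policies, leaving it at false and attaching the new policies after the restore may be the better fit.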

I don't think it's possible to get a message containing the string "cannot be mounted; did you mean to restore it instead" from the API call you quoted.

Yes, it is strange, as my intention was to restore. Is it possible that when another ILM policy is applied, ILM itself makes a mount call?

I think the problem is this: the index is backed by a searchable snapshot (frozen tier), and I want to copy it to another cluster where the specific ILM policy is missing or replaced with another one. When I use the restore functionality from the UI or the API, the newly restored indices display this error.

Should I change the ILM policy not to contain searchable snapshots in the existing cluster?
How could I copy them without such behavior?
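One option worth trying (a sketch, not verified in this thread): the restore API accepts ignore_index_settings, which can strip the ILM settings baked into the snapshotted index so that a fresh policy can be attached after the restore:

POST /_snapshot/migration_data/migration_index_data_02-2021.07.16-hdydzipwqjk2gp3gvkvghq/_restore
{
  "indices": "restored-shrink-index-data-2021.23",
  "include_global_state": false,
  "ignore_index_settings": [
    "index.lifecycle.name",
    "index.lifecycle.rollover_alias"
  ]
}

With the lifecycle settings removed, the restored index should sit idle until a new ILM policy is explicitly assigned to it.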
