Sleeping for [3m] after modifying repository

We started receiving the "Sleeping for [3m] after modifying repository ... because it contains snapshots older than version [7.6.0]" message in production after upgrading from 7.5.2 to 7.10.2, so we created new repos and started snapshotting to them.
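
For reference, this is roughly how we registered the new repos (a sketch only; the repository name, bucket and base_path are placeholders, not our real values):

PUT _snapshot/my_new_repo
{
  "type": "s3",
  "settings": {
    "bucket": "my-snapshot-bucket",
    "base_path": "new-empty-prefix"
  }
}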

But, we're still seeing these messages for the new repos:

[2021-03-25T01:23:23,594][INFO ][o.e.r.s.S3Repository     ] [my-master-node] Sleeping for [3m] after modifying repository [my_new_repo] because it contains snapshots older than version [7.6.0] and therefore is using a backwards compatible metadata format that requires this cooldown period to avoid repository corruption. To get rid of this message and move to the new repository metadata format, either remove all snapshots older than version [7.6.0] from the repository or create a new repository at an empty location.

Our new repos only contain snapshots created by ES 7.10.2 though.

Why might we still be receiving these messages?

Thanks.

Can you share the output of GET _cat/nodes?h=n,v and GET _snapshot/my_new_repo/_all?

Hello,

For the GET _cat/nodes?h=n,v, the output was too long to post in full; every line said 7.10.2 except this one:

my-kibana-node       7.5.2

For the GET _snapshot/my_new_repo/_all:

{
  "snapshots": [
    {
      "snapshot": "REDACTED",
      "uuid": "REDACTED",
      "version_id": 7100299,
      "version": "7.10.2",
      "indices": [
        "my_index_name-2021-03-25"
      ],
      "data_streams": [],
      "include_global_state": false,
      "state": "SUCCESS",
      "start_time": "2021-03-25T01:23:07.560Z",
      "start_time_in_millis": 1616635387560,
      "end_time": "2021-03-25T01:23:22.365Z",
      "end_time_in_millis": 1616635402365,
      "duration_in_millis": 14805,
      "failures": [],
      "shards": {
        "total": 1,
        "failed": 0,
        "successful": 1
      }
    },
    {
      "snapshot": "REDACTED",
      "uuid": "REDACTED",
      "version_id": 7100299,
      "version": "7.10.2",
      "indices": [
        "my_index_name-2021-03-25"
      ],
      "data_streams": [],
      "include_global_state": false,
      "state": "SUCCESS",
      "start_time": "2021-03-25T05:24:05.412Z",
      "start_time_in_millis": 1616649845412,
      "end_time": "2021-03-25T05:24:54.029Z",
      "end_time_in_millis": 1616649894029,
      "duration_in_millis": 48617,
      "failures": [],
      "shards": {
        "total": 1,
        "failed": 0,
        "successful": 1
      }
    },
    {
      "snapshot": "REDACTED",
      "uuid": "REDACTED",
      "version_id": 7100299,
      "version": "7.10.2",
      "indices": [
        "my_index_name-2021-03-25"
      ],
      "data_streams": [],
      "include_global_state": false,
      "state": "SUCCESS",
      "start_time": "2021-03-25T11:12:09.091Z",
      "start_time_in_millis": 1616670729091,
      "end_time": "2021-03-25T11:12:29.498Z",
      "end_time_in_millis": 1616670749498,
      "duration_in_millis": 20407,
      "failures": [],
      "shards": {
        "total": 1,
        "failed": 0,
        "successful": 1
      }
    },
    {
      "snapshot": "REDACTED",
      "uuid": "REDACTED",
      "version_id": 7100299,
      "version": "7.10.2",
      "indices": [
        "my_index_name-2021-03-25"
      ],
      "data_streams": [],
      "include_global_state": false,
      "state": "SUCCESS",
      "start_time": "2021-03-25T15:49:16.013Z",
      "start_time_in_millis": 1616687356013,
      "end_time": "2021-03-25T15:50:24.037Z",
      "end_time_in_millis": 1616687424037,
      "duration_in_millis": 68024,
      "failures": [],
      "shards": {
        "total": 1,
        "failed": 0,
        "successful": 1
      }
    }
  ]
}
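
For a shorter view, something like the following should return just the names and versions (filter_path is a standard request parameter; the field names are taken from the output above):

GET _snapshot/my_new_repo/_all?filter_path=snapshots.snapshot,snapshots.version

As shown above, every snapshot in the new repo is 7.10.2.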

Is it possible that the Kibana node still being on an older version is causing this repo to require the 3-minute delay?

Yes, that would explain it.

So does the metadata format mentioned in the message depend on the lowest version of any node in the cluster, rather than on the versions of the indices being backed up to this repo?

(We are not backing up any indices from this node to this repo.)

Yes, although the message doesn't mention it. You have a mixed-version cluster, which Elasticsearch interprets to mean that there is an ongoing upgrade, and we can't switch to the new wait-free behaviour until the upgrade is complete.
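
Presumably, once my-kibana-node is also on 7.10.2 the cooldown message should stop appearing for repos that contain no pre-7.6.0 snapshots. To confirm no node is still behind, the earlier _cat/nodes request with sorting added should help:

GET _cat/nodes?v&h=name,version&s=version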


Thanks a lot!
