Deleting snapshots very slow after upgrade

Has anyone else had any issues deleting snapshots after upgrading? I upgraded from 7.5.1 -> 7.9.0, and now REST calls to DELETE /_snapshot/repo-name/snapshot-name take exactly 3 minutes to return. If I call GET /_snapshot/repo-name/_all a few seconds after the DELETE call, snapshot-name is already gone. I'm using S3 repositories.
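
For reference, the exact sequence looks roughly like this (repository and snapshot names are just placeholders for the real ones):

# The delete blocks for ~3 minutes before returning
DELETE /_snapshot/my-s3-repo/snapshot-2020-08-01

# A few seconds after issuing the delete, the snapshot is already gone from the listing
GET /_snapshot/my-s3-repo/_all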

Tangentially, I noticed the new setting "snapshot.max_concurrent_operations": 1000 when I hit GET /_cluster/settings?include_defaults=true&pretty. However, I still get this error when I try to take a snapshot while a delete is in progress:

[HTTP/1.1 503 Service Unavailable] 
{
  "error": {
    "root_cause": [
      {
        "type": "concurrent_snapshot_execution_exception",
        "reason": "[my-repo:my-snapshot] cannot snapshot while a snapshot deletion is in-progress in [SnapshotDeletionsInProgress[[other-repo:other-snapshot/ITFOKPFrRnGSlyurFkMY0g]]]"
      }
    ],
    "type": "concurrent_snapshot_execution_exception",
    "reason": "[my-repo:my-snapshot] cannot snapshot while a snapshot deletion is in-progress in [SnapshotDeletionsInProgress[[other-repo:other-snapshot/ITFOKPFrRnGSlyurFkMY0g]]]"
  },
  "status": 503
}
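
For what it's worth, the setting value itself can be pulled out without wading through the full settings dump; something like this with filter_path should do it (the query is just an example, the setting name is the one from the defaults above):

GET /_cluster/settings?include_defaults=true&filter_path=*.snapshot.max_concurrent_operations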

"reason": "[my-repo:my-snapshot] cannot snapshot while a snapshot deletion is in-progress

This is as expected.

Version 7.9 allows multiple snapshot operations at once, and deletes will wait for running snapshots, but once a delete starts, new snapshots are blocked. See the code commit and its comments. It seems more improvements are coming; part of the relevant comment is:

"Delete operations wait for snapshot finalization to finish, are batched as much as possible to improve efficiency and once enqueued in the cluster state prevent new snapshots from starting on data nodes until executed. We could be even more concurrent here in a follow-up..."

The snapshot.max_concurrent_operations setting was added in 7.9, but only as a safety upper limit.

On the version change, no idea, but the snapshot system does track versions of things, so maybe the upgrade is making it go through and do more work. How I wish for 3-minute S3 snapshots, though; we only have a few TB of data in our main cluster and ours take 15-20 minutes at least.

Note that V7.8+ allows wildcard deletes, which may be useful to you.
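
For example, something along these lines should work on 7.8+ (repository and snapshot names are made up):

# Delete every snapshot in the repository whose name starts with "customer-a-"
DELETE /_snapshot/my-s3-repo/customer-a-*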

Hi @Andrew_DS

This should not be happening in a pure 7.9 cluster. Maybe you didn't upgrade all of your nodes to 7.9, and an old-version node in your cluster is blocking the new functionality from running?
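
One quick way to rule that out is to list each node's version, for example:

GET /_cat/nodes?v&h=name,version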

take exactly 3 minutes to return.

This is by design. Until you have deleted all pre-v7.6 snapshots from your repository, we wait for 3 minutes on S3 to prevent potential repository corruption due to S3's eventually consistent nature. Starting in v7.6 we changed the snapshot repository format in a way that makes this unnecessary, but that new format is only used once all pre-v7.6 snapshots have been deleted from the repository.
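
If you want to check whether any pre-v7.6 snapshots are still in the repository, the snapshot listing includes the version each snapshot was created with; a filtered query along these lines is one way to see it (repository name is a placeholder):

GET /_snapshot/my-s3-repo/_all?filter_path=snapshots.snapshot,snapshots.version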

How I wish for 3-minute S3 snapshots, though; we only have a few TB of data in our main cluster and ours take 15-20 minutes at least.

Oh, no, our snapshots easily take 30-90 minutes. I'm just used to the snapshot deletion call returning fairly quickly.

This is by design. Until you have deleted all pre-v7.6 snapshots from your repository, we wait for 3 minutes on S3 to prevent potential repository corruption due to S3's eventually consistent nature.

Oof. Okay, this means I probably want to refactor my backup system to submit all the deletes at once, rather than snapshotting and deleting per customer.
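
A rough sketch of what that refactored delete step might look like, assuming the 7.8+ delete API accepts multiple comma-separated snapshot names (which I believe goes along with the wildcard support mentioned above; snapshot names here are invented):

# One delete call for the whole batch instead of one call per customer
DELETE /_snapshot/my-s3-repo/customer-a-2020-08-01,customer-b-2020-08-01,customer-c-2020-08-01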

Thanks for the prompt responses. It is good to know that this is expected behavior, rather than something in my cluster being in a bad state.

Armin, am I reading the code and code comments wrong on this? They seem to clearly say you cannot execute a snapshot while a delete is running (the reverse not being true). That's in the 7.9 code (in fact, the whole concurrent-operations feature seems to have been added in 7.9).

That should only be the case in the mixed-version cluster code path. Once all nodes in the cluster are on 7.9 or newer, a different path without such concurrency checks is chosen here: https://github.com/elastic/elasticsearch/blob/v7.9.0/server/src/main/java/org/elasticsearch/snapshots/SnapshotsService.java#L1554

Okay, thanks.

It seems that if the version is 7.9+, it will just queue the deletes until ongoing snapshots complete (or abort them if the delete applies to them), according to the package info file. I had read the comments to mean "not allowed" (vs. "not started").

But as you note, you must have older nodes around to hit the other path that doesn't allow the concurrency.
