How do Searchable Snapshot snapshots get cleaned up?

Hard to say, really: searchable snapshots are designed to use effectively infinitely scalable storage like S3, and whatever you're using isn't compatible with S3 in the sense that Elasticsearch requires:

> Note that some storage systems claim to be S3-compatible but do not faithfully emulate S3's behaviour in full. The repository-s3 type requires full compatibility with S3. In particular it must support the same set of API endpoints, return the same errors in case of failures, and offer consistency and performance at least as good as S3 even when accessed concurrently by multiple nodes.
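
For reference, searchable snapshots mount their data from a repository of this type, which you'd register something like the sketch below. This is a minimal illustration only: the repository name, bucket name, endpoint, and credentials are all placeholders for your own setup, and the request assumes the repository-s3 plugin/module is available on the cluster.

```python
import requests

# Minimal sketch: register an S3 snapshot repository over the REST API.
# "my_repository", "my-bucket", the endpoint URL, and the credentials are
# placeholders; only the request shape ("type": "s3" plus a bucket setting)
# is the real repository-s3 contract.
resp = requests.put(
    "http://localhost:9200/_snapshot/my_repository",
    json={
        "type": "s3",
        "settings": {"bucket": "my-bucket"},
    },
    auth=("elastic", "changeme"),
)
resp.raise_for_status()
print(resp.json())
```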

Searching a searchable snapshot index works on the same underlying files as a regular search, but it reads them from the repository in 16 MiB blocks. So a single search that requires 250 random reads against a cold cache would involve 250 API calls, exceeding your request limit basically straight away, and it would also download roughly 250 × 16 MiB ≈ 4000 MiB of data.
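
To make that arithmetic concrete, here's a rough back-of-the-envelope sketch. It only encodes the numbers from the paragraph above (16 MiB blocks, 250 random reads); the assumption that every random read hits a distinct uncached block is the worst case, not a guarantee.

```python
# Worst-case cold-cache cost: each random read fetches one distinct 16 MiB block.
BLOCK_SIZE_MIB = 16

def cold_cache_cost(random_reads: int) -> tuple[int, int]:
    """Return (api_calls, downloaded_mib) for a fully cold cache."""
    api_calls = random_reads                    # one GET per uncached block
    downloaded_mib = random_reads * BLOCK_SIZE_MIB
    return api_calls, downloaded_mib

calls, mib = cold_cache_cost(250)
print(f"{calls} API calls, ~{mib} MiB downloaded")  # 250 API calls, ~4000 MiB downloaded
```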
