Closed indices do not release shards

I am having a problem where closed indices are not releasing their shards. I am not sure if this is a configuration problem or a change in behavior compared to previous versions (I am using 7.17.0 right now), because I have been able to reproduce the same issue on different systems (all single-node setups).

Previously, closing an index in Elasticsearch would release its shards: the number of open shards would decrease, and when querying the index with the cat API the "pri" column would drop to zero. But right now, closed indices are not releasing their active shards. The _cat/indices endpoint marks them as green/close, the number of active shards does not decrease, and the allocation API says the indices still have allocated shards. Reopening and closing the indices doesn't change anything, and there are no errors in the logs or in the responses. Funnily enough, despite being above the maximum shard limit, we can create new indices without problems.
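For reference, these are roughly the checks I'm running, as a sketch (plain Python with requests against the single node on localhost:9200; "my-index" is just a placeholder, not one of our real index names):

```python
# Checks described above, against a single local node.
# "my-index" is a placeholder index name.
import requests

ES = "http://localhost:9200"

# Index-level view: after closing the index I expected "pri" to drop to zero,
# but it stays at its old value.
indices = requests.get(
    f"{ES}/_cat/indices/my-index",
    params={"format": "json", "h": "index,status,health,pri,rep"},
).json()
print(indices)

# Shard-level view: the closed index still shows allocated shards here.
shards = requests.get(
    f"{ES}/_cat/shards/my-index",
    params={"format": "json", "h": "index,shard,prirep,state,node"},
).json()
print(shards)
```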

Is there any way to actually close the indices and deallocate their shards? Or is Elasticsearch counting shards for closed and open indices differently? In that case, would it be possible to access those counts? Right now our monitoring tools are alerting that the maximum number of shards has been exceeded, and we would like to update those alerts if the behavior has changed.
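For context, this is what our monitoring currently reads (again Python with requests; I don't know whether these are the numbers the shard limit is actually checked against, which is basically my question):

```python
# What our monitoring currently reads (single local node assumed).
import requests

ES = "http://localhost:9200"

# Total active shards as reported by cluster health; on 7.17 this still seems
# to include the shards of closed indices.
health = requests.get(
    f"{ES}/_cluster/health",
    params={"filter_path": "status,active_shards,active_primary_shards"},
).json()
print(health)

# The configured per-node shard limit (include_defaults shows the default
# value when the setting has not been set explicitly).
settings = requests.get(
    f"{ES}/_cluster/settings",
    params={"include_defaults": "true", "filter_path": "**.max_shards_per_node"},
).json()
print(settings)
```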

Welcome to our community! :smiley:

I don't know the details, but I do know we made some changes to the way closed indices are handled under the hood in 7.x. This might be one of them: closed indices are still reported as existing, but they don't count against any limits in the cluster.

Hopefully someone that knows for certain can stop in and comment :slight_smile:

Really only by deleting them. Closed indices will consume disk space until deleted, and they are still properly replicated in case you lose some of the nodes in your cluster. You shouldn't really have long-term closed indices: either delete them (after taking a snapshot if you want to bring them back) or else just leave them open.
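If it helps, the snapshot-then-delete step is roughly this (a sketch in Python with requests; it assumes you already have a snapshot repository registered, here called "my-repo", and "my-index" is a placeholder):

```python
# Sketch: snapshot an index you want to retire, then delete it.
# Assumes a snapshot repository named "my-repo" is already registered;
# "my-index" is a placeholder index name.
import requests

ES = "http://localhost:9200"

# Snapshot just this index and wait for the snapshot to complete.
resp = requests.put(
    f"{ES}/_snapshot/my-repo/my-index-archive",
    params={"wait_for_completion": "true"},
    json={"indices": "my-index", "include_global_state": False},
)
resp.raise_for_status()
print(resp.json()["snapshot"]["state"])  # expect "SUCCESS"

# Only once the snapshot is confirmed, delete the index to free its shards
# and disk space. It can be restored later from the snapshot if needed.
requests.delete(f"{ES}/my-index").raise_for_status()
```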

Disk space and replicas are not an issue right now, which is why we used to just close indices: it was the easiest way to manage long-term data that might need to be brought back. Right now the main issue is understanding what is happening with those shards and, above all, fixing the monitoring tools. I was looking at the stats, but I can't see anywhere to monitor the shards that actually count towards the max shard limit.
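In the meantime I am thinking of adjusting the alert to count only the shards of open indices, something like this sketch (it assumes, as suggested above, that closed indices don't count toward the limit, which is exactly what I would like to confirm):

```python
# Workaround sketch for the alert: count only shards of open indices,
# assuming (not confirmed) that closed indices don't count toward
# cluster.max_shards_per_node.
import requests

ES = "http://localhost:9200"

rows = requests.get(
    f"{ES}/_cat/indices",
    params={"format": "json", "h": "index,status,pri,rep"},
).json()

open_shards = sum(
    int(row["pri"]) * (1 + int(row["rep"]))
    for row in rows
    if row["status"] == "open"
)
print(f"shards in open indices: {open_shards}")
```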

Thank you both!

I see, yes, I can't think of a stats API that breaks things down by open/closed state. I could see value in that breakdown for a cluster with long-term closed indices, but this isn't how Elasticsearch expects closed indices to be used.

