I bumped against the cluster shard limit recently, and am trying to find useful ways to manage this.
I'm dealing with logs as my primary use case, and some of these logs are very high volume. By default I've generally set my templates to use 4 shards and 1 replica, although in an effort to optimise indexing speed I've dropped some of those to 0 replicas initially. I use Curator to manage the index lifecycle, which includes hot/warm migrations and deleting indices once they're old enough.
What I don't currently do is close indices, and it occurs to me that a valid strategy to pursue might be to allow replicas older than N days to be closed (reducing search performance), while still allowing them to be re-opened if they need to be promoted to primaries (maintaining high availability).
The goal is to maximize the number of days of indices retained while reducing pressure on the shard count, without changing the essential architecture of the cluster (there will be scope to rearchitect it in future to take advantage of newer features such as Data Streams, Rollover, Data Tiers and perhaps Searchable Snapshots).
Given the constraints above, is Elasticsearch capable of such a policy, and how would it be implemented? I suppose this question is equivalent to asking how you would do this on the Basic tier.
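For reference, the closing part of what I have in mind could look something like this as a Curator action file (the `logs-` prefix and 30-day cutoff are just placeholders for my actual patterns and retention):

```yaml
actions:
  1:
    action: close
    description: >-
      Close log indices older than 30 days to reduce open-shard pressure;
      they can be re-opened on demand if needed.
    options:
      ignore_empty_list: true
    filters:
      - filtertype: pattern
        kind: prefix
        value: logs-
      - filtertype: age
        source: creation_date
        direction: older
        unit: days
        unit_count: 30
```

What I'm unsure about is whether closed indices still count against the cluster shard limit, and whether the replica-vs-primary distinction I described above is something Curator (or Elasticsearch itself) can actually act on.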