How to fix the "maximum shards open" error

If you are indexing 25GB of documents per hour, resulting in 50GB of index size on disk (presumably including the replica), that gives around 1.2TB of data on disk per day. Having 15-20 shards per daily index sounds reasonable in that case, but it will depend on your data and queries. A 30-day retention period, however, gives a total indexed data volume of around 36TB, which sounds like far too much for a 3-node cluster doing heavy indexing. I would expect you to need to scale out the cluster significantly, potentially using a hot/warm architecture.
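To make the arithmetic explicit, here is a quick back-of-the-envelope sketch of those numbers. The shard count of 18 is just a hypothetical midpoint of the 15-20 range mentioned above, and the assumption that the 50GB/hour figure includes one replica comes from the question:

```python
# Capacity estimate based on the figures in the question.
hourly_disk_gb = 50                 # on-disk growth per hour, assumed to include 1 replica
daily_disk_gb = hourly_disk_gb * 24 # 1200 GB, i.e. ~1.2 TB per day

retention_days = 30
total_tb = daily_disk_gb * retention_days / 1000  # ~36 TB retained on disk

# Replica doubles disk usage, so primaries account for half of it.
primary_daily_gb = daily_disk_gb / 2
primary_shards = 18                 # hypothetical midpoint of the 15-20 range
shard_size_gb = primary_daily_gb / primary_shards

print(f"Daily on-disk growth: {daily_disk_gb} GB")
print(f"Total over {retention_days} days: {total_tb:.0f} TB")
print(f"Average primary shard size: {shard_size_gb:.0f} GB")
```

At roughly 33GB per primary shard, the shards land in the commonly recommended tens-of-gigabytes range, which is why 15-20 shards per daily index is a defensible starting point for this volume.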