Not sure if this is a how-to question or a feature request.
I have a requirement to keep as much time series data as possible for our central logging system. Our log event rates vary wildly throughout the year. Simply deleting indices over a certain age is not really a good solution to this problem, nor is deleting indices over a certain size, as some months can be larger than others.
We have now reached capacity, and I typically delete the oldest indices manually to keep the cluster inside a size threshold. I've calculated my safe threshold as 25.5 TB of 40 TB total storage, to allow headroom for shard reallocation and to maintain performance on spinning disk.
The filter I envision would get the current size of the cluster and compare that value to a threshold data size defined in the filter. If the cluster size is greater than the threshold, then, based on an index name and date pattern filter, it would work from the oldest (and largest) indices forward and determine which indices need to be deleted to bring the cluster back beneath the safe threshold.
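Roughly, the logic I have in mind looks like this (just a sketch: in practice the index sizes would come from the cluster's stats or _cat/indices API, and the logstash- date pattern is only my naming convention):

```python
# Sketch of the threshold-based pruning logic described above.
# The threshold and name pattern are specific to my setup; real sizes
# would be fetched from Elasticsearch rather than passed in as a dict.
import re
from datetime import datetime

SAFE_THRESHOLD_BYTES = int(25.5 * 1024**4)  # 25.5 TB safe threshold

def indices_to_delete(indices, threshold=SAFE_THRESHOLD_BYTES,
                      pattern=r"^logstash-(\d{4}\.\d{2}\.\d{2})$"):
    """Given a {index_name: size_in_bytes} mapping, pick the oldest
    pattern-matching indices to delete until the remaining total
    fits under the threshold."""
    dated = []
    for name, size in indices.items():
        m = re.match(pattern, name)
        if m:
            # Parse the date embedded in the index name.
            dated.append((datetime.strptime(m.group(1), "%Y.%m.%d"),
                          name, size))
    total = sum(indices.values())
    to_delete = []
    # Work from the oldest matching index forward.
    for _date, name, size in sorted(dated):
        if total <= threshold:
            break
        to_delete.append(name)
        total -= size
    return to_delete
```

Non-matching indices (e.g. .kibana) are never selected, and deletion stops as soon as the projected total is back under the threshold.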
My goal is to use Curator, a consistent and feature-rich tool, to do this. Reading the documentation, I could not find any configuration settings for enforcing a cluster-wide disk size limit. In my Google searching I found a now-closed GitHub issue that matched my use case, so perhaps this functionality is now built in, but I'm unable to see how to actually accomplish my requirements.
Can anyone shed some light on this for me? I've got a batch script that prunes the oldest indices with a size comparison in the meantime, and that's fine. But I would prefer to leverage the other features of Curator, and having a single tool that accomplishes everything would be ideal. I'm hoping I'm just not understanding how to do this with the appropriate filters.