I want to delete old indices based on total disk consumption (Elasticsearch indices + files stored by other DBMSs + system logs + etc.). However, from the Curator documentation, it looks like Curator can only delete old indices when the disk consumption of the indices themselves goes over a threshold.
Yeah, there's no way for Curator to know how to sum those things up. Additionally, the threshold Curator uses is for total index disk usage across all nodes, regardless of how many of the shards to be deleted might sit on any given node. Deleting by disk space is a supported use case, but it is rather unpredictable because Elasticsearch can allocate shards in a seemingly random fashion, putting more of that disk space on one node than another. It also results in extra post-delete shard reallocation, as Elasticsearch will then try to ensure that shards are evenly distributed again (by total shard count per node).
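Since Curator only looks at its own indices, one workaround is a small external script that watches overall filesystem usage and deletes the oldest matching indices when a budget is exceeded. Below is a minimal sketch using the elasticsearch-py client; the host, mount point, index pattern, and budget are placeholder assumptions, not anything Curator itself provides.

```python
#!/usr/bin/env python3
"""Sketch (not Curator): delete the oldest indices when overall filesystem
usage, not just index size, crosses a threshold."""

import shutil
from elasticsearch import Elasticsearch

ES_HOST = "http://localhost:9200"        # assumption: single local node
DATA_MOUNT = "/var/lib/elasticsearch"    # assumption: ES data, DBMS files, and logs share this filesystem
MAX_USED_BYTES = 500 * 1024**3           # example budget: 500 GiB
INDEX_PATTERN = "logstash-*"             # example pattern for deletable indices

def filesystem_used() -> int:
    # Used bytes on the whole filesystem, so it counts non-Elasticsearch data too.
    return shutil.disk_usage(DATA_MOUNT).used

def main() -> None:
    es = Elasticsearch(ES_HOST)
    # Sort matching indices oldest-first using the creation_date stored in index settings.
    settings = es.indices.get_settings(index=INDEX_PATTERN)
    by_age = sorted(settings,
                    key=lambda name: int(settings[name]["settings"]["index"]["creation_date"]))
    for name in by_age:
        if filesystem_used() <= MAX_USED_BYTES:
            break
        print(f"Deleting {name} to reclaim space")
        es.indices.delete(index=name)

if __name__ == "__main__":
    main()
```

Note that disk space is not always reclaimed instantly after a delete, and on a multi-node cluster you would need per-node accounting, which is exactly the unpredictability described above.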