Configure an index lifecycle management (ILM) policy to delete indices based on size (Elasticsearch)

Version: ES 7.6.0

I have Elasticsearch set up with Filebeat shipping logs, and rollover configured through an index lifecycle management (ILM) policy. The policy has two phases: hot and delete. In the delete phase, Kibana offers an option to delete an index a given number of days after rollover, which I have set to 3 days.

I recently ran into an issue during a load test: there was a sudden spike in disk usage, but the policy did not delete any indices because deletion is tied to the 3-day age. As a result, the disk crossed the 95% watermark and the index went into read-only mode.

For such a scenario, is there a way in an ILM policy to delete old indices once a certain disk-space threshold is reached, instead of only deleting based on the number of days since the last rollover?
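For reference, the policy looks roughly like this (the policy name and rollover thresholds below are illustrative, not my exact values):

```
PUT _ilm/policy/filebeat-policy
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": {
            "max_size": "50gb",
            "max_age": "1d"
          }
        }
      },
      "delete": {
        "min_age": "3d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```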


There's no disk-level trigger at this stage; check out https://github.com/elastic/elasticsearch/issues/49392 though.

Hi @warkolm,
Thanks for your reply. I'm surprised this feature hasn't been implemented yet, as it seems like a fairly common use case. How should I deal with scenarios where there is a space crunch and I want to delete old indices to free up space for new ones?

You would currently need to monitor it yourself.
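As a rough manual approach (the index name below is just an example), you can check disk usage per node and delete the oldest rolled-over indices yourself when space gets tight:

```
# Check disk usage per node
GET _cat/allocation?v&h=node,disk.used,disk.avail,disk.percent

# List filebeat indices oldest-first to pick deletion candidates
GET _cat/indices/filebeat-*?v&s=creation.date:asc

# Delete the oldest index (example name)
DELETE filebeat-7.6.0-2020.07.01-000001
```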

Curator does support this as far as I know.
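Something along these lines should work with Curator's `space` filter (the `filebeat-` prefix and 50 GB threshold are just examples); it deletes the oldest matching indices once their combined size exceeds the threshold:

```yaml
# delete_by_space.yml — Curator action file
# Run with: curator --config config.yml delete_by_space.yml
actions:
  1:
    action: delete_indices
    description: >-
      Delete the oldest filebeat-* indices when their combined size
      exceeds 50 GB.
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: filebeat-
      - filtertype: space
        disk_space: 50
        use_age: True
        source: creation_date
```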

Along the same lines, is there a feature where I can bind ES to use only a certain amount of space on a disk, e.g. 50 GB of a 200 GB disk?

https://www.elastic.co/guide/en/elasticsearch/reference/7.8/modules-cluster.html#disk-based-shard-allocation

So the value in "cluster.routing.allocation.disk.watermark.flood_stage" will be the maximum amount of disk space ES will use on the node.
Is this understanding correct?

Yes that's right.
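One thing to keep in mind: when the watermark settings are given as byte values they specify the minimum free space remaining, not the space used, and you can't mix percentage and byte values across the three settings. So to keep ES to roughly 50 GB on a 200 GB disk you would set the flood stage to leave about 150 GB free, something like this (values are illustrative):

```
PUT _cluster/settings
{
  "persistent": {
    "cluster.routing.allocation.disk.watermark.low": "160gb",
    "cluster.routing.allocation.disk.watermark.high": "155gb",
    "cluster.routing.allocation.disk.watermark.flood_stage": "150gb"
  }
}
```

Note that this doesn't reserve the space up front; once the flood stage is crossed the affected indices are marked read-only, which stops further growth from indexing.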
