Delete old indexes

My indexes are created daily with names like xxx-yy-mm-dd. How can I automatically remove the old ones?

I am looking for an easy way, because the whole thing is generated automatically by tshark.

To be honest I tried to attach an index lifecycle policy via index templates, but I don't understand a lot of the options :(… I just want to delete indexes older than 30 days, or when the HDD runs out of space, etc.

By default, are the oldest indexes removed when the HDD is full, or what happens then?

Hi cyberzlo,

So you already mentioned Index Lifecycle Management.

Here's the detailed doc from the ES side.

In Kibana you would need to define a delete phase.
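For reference, a delete-only ILM policy is quite small. This is a minimal sketch you could paste into Kibana Dev Tools, assuming a hypothetical policy name `delete-after-30d`; `min_age` is measured from index creation (or rollover) time:

```
PUT _ilm/policy/delete-after-30d
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "30d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

You would then reference the policy from your index template so every new daily index picks it up.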

However, your current setup sounds like you'd need a custom solution. Did you have a look at Curator? It allows you to build custom setups for your purpose.

Hope this helps,

I don't have X-Pack, only the free Basic license, so I should use Curator? It looks like a separate program. Will it be better than just a crontab with some curl script to delete old indexes?

Index Lifecycle Management is part of our Basic license. Curator offers lots of configurable options for managing indices, and combined with crontab it's pretty powerful. However, for your use case of deleting old indices, a nice bash script using curl is also an option.
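As a rough sketch of such a script: it assumes the daily indices are named `tshark-yy-mm-dd` (adjust `PREFIX` and the date format to whatever tshark actually produces), uses GNU `date` for the date math, and with `DRY_RUN=1` it only prints the curl commands instead of running them:

```shell
#!/bin/sh
# Delete daily indices older than KEEP_DAYS (sketch; names and host are assumptions).
PREFIX="${PREFIX:-tshark}"
ES="${ES:-http://localhost:9200}"
KEEP_DAYS="${KEEP_DAYS:-30}"
LOOKBACK="${LOOKBACK:-60}"   # how many days back to try in total
DRY_RUN="${DRY_RUN:-1}"      # 1 = just print the curl commands

day=$((KEEP_DAYS + 1))
while [ "$day" -le "$LOOKBACK" ]; do
  # GNU date: compute the yy-mm-dd suffix for "$day days ago"
  suffix=$(date -d "$day days ago" +%y-%m-%d)
  if [ "$DRY_RUN" = 1 ]; then
    echo curl -s -XDELETE "$ES/$PREFIX-$suffix"
  else
    curl -s -XDELETE "$ES/$PREFIX-$suffix"
  fi
  day=$((day + 1))
done
```

Run it from crontab once a day; deleting an index name that no longer exists just returns a 404, which is harmless.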


Is there an easy way to find the size of indexes? For example, I could look up the total size of all indexes matching a pattern, then start deleting until it reaches the desired value.

In Curator there are several filters that allow you to select indices by size, age, etc.

I built a solution a long time ago on top of it, and it was very powerful, so it should fit your use case.
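For a quick look at sizes without Curator, the `_cat/indices` API with `bytes=b` gives machine-readable numbers you can sum. A small sketch: the `sample` heredoc below stands in for the output of something like `curl -s 'localhost:9200/_cat/indices/tshark-*?h=index,store.size&bytes=b'` (the host and index pattern are assumptions):

```shell
#!/bin/sh
# Sum the on-disk size of all indices matching a prefix.
# In real use, replace the sample data with the curl call shown above.
sample='tshark-21-03-01 1048576
tshark-21-03-02 2097152
tshark-21-03-03 524288'

total=$(echo "$sample" | awk '{sum += $2} END {print sum}')
echo "total bytes: $total"
```

From there you can compare `total` against your budget and feed the oldest names into DELETE calls.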

Is there an easy way to just delete the oldest indexes if the total size exceeds 70% of free disk space, or some declared value? I tried some CLI commands from tutorials/articles, but it looks like the syntax is different in the new version and I have problems building such a query.

Curator does have disk-space-based filtering, but you have to set the total threshold manually.

You also will want to exclude the Kibana and system indices. The disk space filter will then leave you with indices in excess of the specified number of gigabytes used.
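The relevant Curator filter is `filtertype: space`, where `disk_space` is the threshold in gigabytes. A sketch of the filter section, assuming `tshark-` daily indices (matching only your own prefix also keeps Kibana and system indices out of the working list):

```yaml
filters:
  - filtertype: pattern
    kind: prefix
    value: tshark-
  - filtertype: space
    disk_space: 100        # keep at most ~100 GB; delete the excess
    use_age: True          # delete oldest first
    source: name
    timestring: '%y-%m-%d'
```

The `100` here is a placeholder; Curator will not compute "70% of free disk" for you, so you have to work that number out yourself and update it if the disk size changes.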

I think I found your old answer with a Curator config, Delete oldest indices based on ES cluster size. Is this still fine, or should some options be changed?

"excess of 25.5TB of data" - how is this counted? it count only this indexes with prefix? whole elasticsearch database size? or how :slight_smile:

Second question: if I have two actions, the first for size and the second for indices older than 30 days, will that work fine, or could there be collisions between the actions?

It counts the sum total of all primary and replica shards of the indices in Curator's working list (however many indices remain after any filtering).

Actions are processed in series. If you plan on doing 2 actions, I recommend doing the age-based one first. Perhaps that will clear out enough space that the size-based action will not delete anything.
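Putting that ordering into one action file could look like this sketch (again assuming `tshark-` indices named by `%y-%m-%d`; the `disk_space` value is a placeholder):

```yaml
actions:
  1:
    action: delete_indices
    description: "First delete indices older than 30 days"
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: tshark-
      - filtertype: age
        source: name
        direction: older
        timestring: '%y-%m-%d'
        unit: days
        unit_count: 30
  2:
    action: delete_indices
    description: "Then trim whatever still exceeds the size budget"
    options:
      ignore_empty_list: True
    filters:
      - filtertype: pattern
        kind: prefix
        value: tshark-
      - filtertype: space
        disk_space: 100
        use_age: True
        source: name
        timestring: '%y-%m-%d'
```

Because action 2 rebuilds its working list after action 1 has run, there is no collision: the size-based delete only sees whatever the age-based delete left behind.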


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.