My indexes are created every day with names like xxx-yy-mm-dd. How can I automatically remove old indexes?
I am looking for an easy way, because the whole thing is generated automatically by tshark.
To be honest, I tried to add an index lifecycle policy through index templates, but I don't understand a lot of the options :( … I just want to delete indexes older than 30 days, or when there is no space left on the HDD, etc.
Are the oldest indexes removed by default when the HDD runs out of space, or what happens then?
I don't have X-Pack, only the free Basic license - so should I use Curator? It looks like this is a separate program. Will it be better than just crontab with some curl script to delete old indexes?
Index lifecycle management is part of our Basic license. Curator offers lots of configurable options for managing indices; combined with crontab it's pretty powerful. However, for your use case of deleting old indices, a nice bash script using curl is also an option.
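If you go the script route, something like this would do it (untested sketch - the URL, the `packets-` prefix and the date format are just placeholders, adjust them to whatever tshark actually produces):

```bash
#!/bin/bash
# Delete daily indices older than 30 days.
# Assumes index names like packets-2019-01-15 - adjust PREFIX and the date format.
ES="http://localhost:9200"
PREFIX="packets-"
CUTOFF=$(date -d "30 days ago" +%Y-%m-%d)   # GNU date; on BSD/macOS use: date -v-30d +%Y-%m-%d

# _cat/indices with h=index returns just the matching index names, one per line.
for idx in $(curl -s "$ES/_cat/indices/${PREFIX}*?h=index"); do
  idx_date=${idx#$PREFIX}
  # ISO-style dates compare correctly as plain strings.
  if [[ "$idx_date" < "$CUTOFF" ]]; then
    echo "deleting $idx"
    curl -s -X DELETE "$ES/$idx"
    echo
  fi
done
```

Run that from cron once a day and you get roughly the same behaviour as an age-based delete action.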
Is there some easy way to find the size of indexes? For example, I could look up the total size of all indexes that match a pattern, then start deleting until it reaches the desired value.
Is there some easy way to just delete the oldest indexes if the total size is more than 70% of free disk space, or more than a declared value? I tried to use some CLI commands from tutorials/articles, but it looks like the syntax is different in the new version and I have problems creating such a query.
Curator does have disk space based filtering, but you have to set the total threshold manually.
You will also want to exclude the Kibana and system indices. The disk space filter will then leave you with the indices in excess of the specified number of gigabytes used.
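As a rough sketch (the prefix, threshold and date format below are example values, not taken from your setup), the filter section of a delete action could look like:

```yaml
filters:
- filtertype: pattern        # only consider the tshark indices
  kind: prefix
  value: packets-            # example prefix, adjust to your naming
- filtertype: pattern        # skip .kibana and other system indices
  kind: regex
  value: '^\.'
  exclude: True
- filtertype: space          # the oldest indices in excess of disk_space end up in the delete list
  disk_space: 25500          # in gigabytes, i.e. 25.5TB - example value only
  use_age: True
  source: name
  timestring: '%Y-%m-%d'
```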
"excess of 25.5TB of data" - how is this counted? it count only this indexes with prefix? whole elasticsearch database size? or how
Second question, if I have 2 actions, first for size and second for 30d old indexes, it will work fine or there is possible some problem of collisions between actions etc?
It counts the sum total of all primary and replica shards of the indices in Curator's working list (however many indices remain after any filtering).
Actions are processed in series. If you plan on doing 2 actions, I recommend doing the age-based one first. Perhaps that will clear out enough space that the size-based action will not delete anything.
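So a single action file could look roughly like this (again, the prefix, timestring and disk_space are placeholder values you'd need to adapt):

```yaml
actions:
  1:
    action: delete_indices
    description: "First pass: delete indices older than 30 days (date taken from the index name)"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: packets-          # example prefix
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y-%m-%d'   # adjust to your actual naming scheme
      unit: days
      unit_count: 30
  2:
    action: delete_indices
    description: "Second pass: if the remaining indices still exceed the size threshold, delete the oldest ones"
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: packets-
    - filtertype: space
      disk_space: 25500        # gigabytes; example value
      use_age: True
      source: name
      timestring: '%Y-%m-%d'
```

Action 1 runs to completion before action 2 starts, so the size-based pass only sees whatever the age-based pass left behind.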