Hi,
We are using ES to track ERROR/FAILURE events in our daily submitted CLOUD jobs, but once an index gets old it is no longer useful to us. Is there an option to have an index automatically recycled once it reaches a certain age, or do we need to handle that at our level?
Hi Mark,
Our existing ES framework (version 2.3) creates an index on a daily basis, and it has been running continuously for quite a while.
What I have noticed is that sometimes the daily index is not generated at all, and the last log message from the previous day says something like "one or more nodes has gone under the high or low watermark...".
If I then delete some old indices, new ones start being generated normally again.
So that is what I am asking: could we auto-delete the old indices when the total is about to reach some maximum?
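Until someone suggests a proper tool (Elasticsearch Curator is the commonly recommended one for time-series indices), here is a minimal cron-style sketch of the idea: compute a cutoff date and delete the daily index for that day via the delete-index API. The index name pattern `logs-YYYY.MM.DD`, the retention of 30 days, and the host `localhost:9200` are all assumptions for illustration; the `curl` call is commented out so the script is safe to dry-run.

```shell
#!/bin/sh
# Hedged sketch: drop daily indices older than RETENTION_DAYS.
# Assumes GNU date, an index naming scheme like logs-2016.05.01,
# and an ES node reachable at localhost:9200 (all hypothetical here).
RETENTION_DAYS=30
CUTOFF=$(date -d "-${RETENTION_DAYS} days" +%Y.%m.%d)
INDEX="logs-${CUTOFF}"
echo "Would delete index: ${INDEX}"
# Uncomment to actually delete:
# curl -XDELETE "http://localhost:9200/${INDEX}"
```

Run from cron once a day; each run removes the index that just aged past the retention window. Note that the "watermark" message you saw comes from disk-based shard allocation (the `cluster.routing.allocation.disk.watermark.low`/`high` settings), which is exactly why freeing disk by deleting old indices lets new ones be created again.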