Is there a way to configure an index so that every document gets deleted once it is more than 90 days old?
I saw a lot of topics that propose running a delete query from a cron job, but that sounds very inefficient.
Others propose adding a lifecycle policy that rolls over to a new index every day and deletes indices older than 90 days. This sounds good, but I'd like to know if there isn't something built in.
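For reference, the lifecycle-policy approach mentioned above can be sketched as an ILM policy with a daily rollover and a delete phase (the policy name is illustrative, and the exact age thresholds would need tuning):

```
PUT _ilm/policy/delete-after-90d
{
  "policy": {
    "phases": {
      "hot": {
        "actions": {
          "rollover": { "max_age": "1d" }
        }
      },
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

With this approach, deletion happens at index granularity (dropping whole daily indices), which is far cheaper than deleting individual documents.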