Hello everyone,
I would like to back up part of my data to another server. Every index has a field called "timestamp", and I would like to send all data older than 30 days to the backup cluster and then remove it from the primary cluster. Is this possible in an automatic way, or do I have to write a script?
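For what it's worth, one way to express "older than 30 days" is a range query on the `timestamp` field. The sketch below just builds that query body; it is an assumption-laden illustration (the field name and the 30-day window come from my post above, the helper name and everything else are made up), but the same body could drive both the copy to the backup cluster and a later delete-by-query on the primary:

```python
from datetime import datetime, timedelta, timezone

def cutoff_query(days=30, field="timestamp"):
    """Build an Elasticsearch-style range query matching docs older than `days`.

    Hypothetical helper: the returned dict is meant to be sent as the body of
    a search (to copy old docs to the backup cluster) and then of a
    delete-by-query on the primary cluster.
    """
    cutoff = (datetime.now(timezone.utc) - timedelta(days=days)).isoformat()
    return {"query": {"range": {field: {"lt": cutoff}}}}
```

Running the copy first and the delete second (with the same cutoff timestamp, not a freshly computed one) avoids deleting documents that were never copied.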
I haven't read about time-based indices, and I'm afraid I don't have this option enabled because I could not find any log on the server.
Do you know where the logs are supposed to be?
I think I can use this to add and delete documents in bulk. Do you agree?
Time-based indices are just indices arranged by time, so documents for today go into an index called indexname-2016.10.10, tomorrow's go into indexname-2016.10.11, and so on.
This makes deletion easier, as you just delete the entire index.
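To make the naming concrete, here is a small sketch. The `indexname-YYYY.MM.DD` pattern comes from the example above; the helper function itself is hypothetical:

```python
from datetime import date

def daily_index(base, day):
    """Return the daily index name for `base`, e.g. indexname-2016.10.10."""
    return f"{base}-{day:%Y.%m.%d}"

# Dropping a whole day of old data is then a single index deletion, e.g.:
#   DELETE /indexname-2016.10.10
# instead of a delete-by-query over millions of individual documents.
```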