We recently moved to the Elastic Stack on Elastic Cloud, but we have a very minimal setup and would like some guidance on how to archive logs without doing anything unnecessarily expensive.
We have a single node that handles all the Elasticsearch processing, with 2 GB of RAM and 60 GB of disk. We're already using 40 GB and would like to know how to archive the older logs instead of just deleting them. What would be the best way to go about this? A rough sketch of what I mean by "archiving" is below.
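For context, the kind of thing I was imagining is snapshotting old indices to object storage and then deleting them from the node. Something like this, assuming an S3 repository (the repository name `log_archive`, bucket `my-log-archive`, and index `logs-2023.01` are all placeholders; I'm also not sure whether our Elastic Cloud plan lets us register a custom repository like this):

```
# Register a snapshot repository backed by an S3 bucket (placeholder names)
PUT _snapshot/log_archive
{
  "type": "s3",
  "settings": {
    "bucket": "my-log-archive"
  }
}

# Snapshot one old index into the repository
PUT _snapshot/log_archive/logs-2023.01?wait_for_completion=true
{
  "indices": "logs-2023.01",
  "include_global_state": false
}

# Free up disk on the node once the snapshot succeeds
DELETE /logs-2023.01
```

Is that roughly the right approach, or is there something cheaper or more automatic?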
Thanks for the reply. I've now noticed the options for adding warm/cold data tiers in the cloud console, but the pricing for a cold tier seems quite high compared to the hot node. If I rarely query the old data, does that mean I pay less overall?
I don't suppose there's a way to have ILM (or something else on Elastic Cloud) do the re-indexing for me with the best_compression codec enabled? The only alternative I can think of is running a long-lived script to do it myself.
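To be concrete, here is roughly what that script would do by hand. This is an untested sketch; the index names `logs-2023.01.15` and `logs-2023.01.15-archived` are placeholders:

```
# Create the target index with the best_compression codec
PUT /logs-2023.01.15-archived
{
  "settings": {
    "index.codec": "best_compression"
  }
}

# Copy the documents across (run async since this can take a while)
POST _reindex?wait_for_completion=false
{
  "source": { "index": "logs-2023.01.15" },
  "dest": { "index": "logs-2023.01.15-archived" }
}

# Force-merge to a single segment to reclaim space, then the
# original index can be deleted
POST /logs-2023.01.15-archived/_forcemerge?max_num_segments=1
```

I did also spot that the ILM force merge action appears to accept an `index_codec` option set to `best_compression`, which might avoid the reindex entirely, but I haven't tested whether that works on our version.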