We are collecting logs from around 50 machines throughout the day, so we need to reduce the size of the log files.
I wanted to ask whether we can zip the log files once they become so numerous that they take up a large amount of space.
Or is there any alternative other than zipping?
You can change the compression of indices you are not writing to anymore (old indices) and also use force merge.
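As a rough sketch (the index name logs-2023.01 below is just a placeholder), switching an old index to best_compression and then force merging it could look like this; note that changing the codec requires closing and reopening the index first:

```
POST /logs-2023.01/_close

PUT /logs-2023.01/_settings
{
  "index.codec": "best_compression"
}

POST /logs-2023.01/_open

POST /logs-2023.01/_forcemerge?max_num_segments=1
```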
You can just delete indices you are not using anymore (very old indices).
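Deleting an index you no longer need is a single request (again, the index name is only an example):

```
DELETE /logs-2022.01
```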
To automate all of that, I'd suggest looking at the Curator tool.
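For instance, a minimal Curator action file that deletes indices older than a year might look roughly like the sketch below; the logstash- prefix, the date format in the index names, and the 365-day retention are just assumptions for illustration:

```yaml
actions:
  1:
    action: delete_indices
    description: >-
      Delete indices older than 365 days, matched by the logstash- prefix
      and the date in the index name.
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 365
```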
How can we archive and un-archive data to reduce the size of the active search database?
Explanation:
By this query I meant to ask whether there is an option to keep the logs in an active or passive state based on a time duration. For example, I might want to keep the last year of logs in an active state, i.e. available in Kibana at any time, while the rest of the older logs stay in a passive state, consuming as little of my storage as possible.