Zipping log files once they reach a threshold of 1 GB

Hello Everyone!

We collect logs from around 50 machines throughout the day, so we need to reduce the size of the log files.
I wanted to ask whether we can zip the log files once they become so numerous that they take up a large amount of space.
Or is there any alternative to zipping?

Thank you 🙂

I'm not sure your question is related to Elasticsearch.

Which log files are you talking about?

In the process of importing files from the remote machines, the system running the Elasticsearch and Logstash configuration has accumulated large indices. So:

  1. Is there any option, via any Elasticsearch mechanism, to compress the indices and reduce the space they use?
  2. Is there any way to delete old logs that are no longer useful to me?

You can change the compression of indices you are not writing to anymore (old indices) and also use force merge.
You can simply delete indices you are not using anymore (very old indices).
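As a minimal sketch of that sequence using the Elasticsearch REST API (the index names here are hypothetical examples; `index.codec` is a static setting, so the index has to be closed before it can be changed):

```
# Close the old index so its static settings can be updated.
POST /logstash-2017.01.01/_close

# Switch the index to the more space-efficient DEFLATE codec.
PUT /logstash-2017.01.01/_settings
{
  "index.codec": "best_compression"
}

POST /logstash-2017.01.01/_open

# Force merge to a single segment so the data is rewritten
# with the new codec and the space savings take effect.
POST /logstash-2017.01.01/_forcemerge?max_num_segments=1

# Delete an index you no longer need at all.
DELETE /logstash-2016.01.01
```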

To automate all that, I'd suggest looking at the Curator tool.
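For instance, a sketch of a Curator action file that deletes time-based indices older than 90 days (the `logstash-` prefix, date pattern, and retention period are assumptions you would adapt to your setup):

```yaml
# delete_old_indices.yml -- a sketch of a Curator action file.
# Assumes daily indices named logstash-YYYY.MM.dd.
actions:
  1:
    action: delete_indices
    description: Delete indices older than 90 days
    options:
      ignore_empty_list: True
    filters:
    - filtertype: pattern
      kind: prefix
      value: logstash-
    - filtertype: age
      source: name
      direction: older
      timestring: '%Y.%m.%d'
      unit: days
      unit_count: 90
```

You would then run it on a schedule, e.g. `curator --config config.yml delete_old_indices.yml` from cron.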


Thanks, I am checking out the Curator tool.

You may also want to look at this guide in the documentation.

How can we archive and un-archive data to reduce the size of the active search database?
Explanation:
By this query I meant to ask whether there is an option to keep logs in an active or passive state based on a time duration. For example, I might want to keep my logs from the last year in an active state, i.e. available in Kibana at any time, while the rest of the older logs sit in a passive state consuming as little space as possible.

You may be able to close older indices, although that means you will need to explicitly open them again if you want to search them.
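A minimal sketch of that workflow, again with a hypothetical index name:

```
# Close an old index: it stays on disk but uses almost no resources
# and is not searchable until it is reopened.
POST /logstash-2016.06.01/_close

# Reopen it later when you need to search it from Kibana again.
POST /logstash-2016.06.01/_open
```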
