Is there a way to reduce the amount of storage that is used by ES? Right now I have about 30 servers using Winlogbeat to ship logs to Logstash and then to ES. A new index gets created each day, and by the end of the day each index is around 30 GB.
I have everything set up with the defaults, so every field is getting indexed. I saw there was something I could do with mappings, but I'm not really sure how to work with that. I also tried removing fields with mutate { remove_field => ... }, but that doesn't seem to do much either.
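For reference, this is roughly the kind of filter I tried in my Logstash pipeline (the field names here are just examples, not my full list):

```
filter {
  mutate {
    # drop some verbose fields before the event is sent to Elasticsearch
    # (field names are examples only)
    remove_field => [ "message", "event_data" ]
  }
}
```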
You don't have to use the rollover approach described in the article if you don't want to, but the idea is:
When an index is no longer being written to, shrink it down to a single shard in a new index (using the _shrink API), and make sure the new index's codec is set to best_compression.
Force merge the shrunk index down to 1 segment so that best_compression actually takes effect (see the sketch after these steps).
The steps above are very concise; please refer to the blog post for more details.
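In REST API terms, that looks roughly like this (the index and node names below are placeholders, not anything from your cluster):

```
# 1. Stop writes and move all shards of the old index onto a single node
#    (both are prerequisites for _shrink; the node name is a placeholder)
PUT /winlogbeat-2019.01.01/_settings
{
  "index.blocks.write": true,
  "index.routing.allocation.require._name": "node-1"
}

# 2. Shrink to a single shard and switch the new index to best_compression
POST /winlogbeat-2019.01.01/_shrink/winlogbeat-2019.01.01-shrunk
{
  "settings": {
    "index.number_of_shards": 1,
    "index.codec": "best_compression"
  }
}

# 3. Force merge down to one segment so the segments get rewritten with the
#    best_compression codec
POST /winlogbeat-2019.01.01-shrunk/_forcemerge?max_num_segments=1
```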