Elasticsearch compression

Dear all,

I collect logs from 25 application hosts; in total I have approximately 100-120 applications.

The size of my daily index is roughly 100 GB.

This is a real problem, because I only have 2 TB of storage disk space (roughly 20 days of logs at the current rate).

Is there any way to compress my indices in ES better than they are now?

Note: I have already set the best_compression option in the ES config, but it doesn't seem to make any difference.
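(For reference, a minimal sketch of how best_compression is typically applied, assuming the Python elasticsearch client and daily logstash-* indices; the host, template and index names below are placeholders. Because index.codec is an index-level setting, it is usually put into an index template so every newly created daily index picks it up, and it only affects segments written after it is in place, which is one common reason the setting appears to do nothing.)

    from elasticsearch import Elasticsearch

    # Placeholder URL; point this at one of your own nodes.
    es = Elasticsearch("http://localhost:9200")

    # best_compression is an index-level setting, so for daily log indices it is
    # normally set in a template that every newly created index picks up.
    es.indices.put_template(
        name="logs-best-compression",          # hypothetical template name
        body={
            "index_patterns": ["logstash-*"],  # adjust to your index naming scheme
            "settings": {"index.codec": "best_compression"},
        },
    )

    # The codec only applies to newly written segments. Force-merging rewrites an
    # index's segments with whatever codec that index currently has, so existing
    # data will not shrink unless its codec was changed (on a closed index) and
    # its segments are then rewritten like this.
    es.indices.forcemerge(index="logstash-2019.01.01", max_num_segments=1)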

Best compression will compress the _source field more efficiently, and should save you some space. If you are using the default mappings, you can optimise these depending on what you know about your data. This blog post covers this, and even though it is getting a bit old, most of it is still valid. Compression efficiency also tends, at least to some extent, to depend on shard size, so if you have lots of small shards, you may get better compression ratios by consolidating them.
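As a rough illustration of those last two points, here is a sketch assuming the Python client, ES 7.x (on 6.x the template mappings need a document type) and logstash-* daily indices; the template names, node name, field rules and index names are hypothetical. The first part maps dynamic string fields as plain keywords instead of the default text + keyword multi-field, so each value is indexed only once; the second part shrinks an over-sharded daily index down to a single shard.

    from elasticsearch import Elasticsearch

    es = Elasticsearch("http://localhost:9200")  # placeholder URL

    # Map dynamic string fields as plain keywords instead of the default
    # text + keyword multi-field, so each value is indexed only once.
    es.indices.put_template(
        name="logs-lean-mappings",  # hypothetical template name
        body={
            "index_patterns": ["logstash-*"],
            "mappings": {
                "dynamic_templates": [
                    {
                        "strings_as_keywords": {
                            "match_mapping_type": "string",
                            "mapping": {"type": "keyword", "ignore_above": 1024},
                        }
                    }
                ]
            },
        },
    )

    # Consolidate an over-sharded daily index into a single-shard copy.
    es.indices.put_settings(
        index="logstash-2019.01.01",
        body={
            # Shrink requires the index to be read-only and a copy of every
            # shard on one node; "es-node-1" is a placeholder data node name.
            "index.blocks.write": True,
            "index.routing.allocation.require._name": "es-node-1",
        },
    )
    es.indices.shrink(
        index="logstash-2019.01.01",
        target="logstash-2019.01.01-shrunk",
        body={
            "settings": {
                "index.number_of_shards": 1,
                "index.codec": "best_compression",
                # Clear the temporary settings on the shrunken copy.
                "index.routing.allocation.require._name": None,
                "index.blocks.write": None,
            }
        },
    )

Whether keyword-only mappings are appropriate depends on whether you need full-text search on those fields, so treat this as a starting point rather than a drop-in config.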
