Does Elasticsearch automatically compress indexes?

Dears,

We run ELK version 7.7.1. Every day we send about 30 GB of logs to it, and I'm worried about filesystem space.
My question is: does Elasticsearch automatically compress indices?
or
Should it be forced with the correct configuration?

Best Regards,
Dan

Yes, the source is compressed by default (LZ4), although you can make this more aggressive through the best_compression codec.

Thank you @Christian_Dahlqvist

How can it be changed?

https://www.elastic.co/guide/en/elasticsearch/reference/current/tune-for-disk-usage.html
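In short: the codec is controlled by the index.codec setting, which must be set when an index is created, so for daily log indices you would normally put it in the index template. A minimal sketch using the legacy template API available in 7.x (the template name and index pattern here are just examples):

```
PUT _template/logs_compression
{
  "index_patterns": ["index_name-*"],
  "settings": {
    "index.codec": "best_compression"
  }
}
```

Existing indices keep their current codec; only indices created after the template change will use best_compression.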

@warkolm thank you very much

Is there any way to check compression method per index?

For example: right now I have the default LZ4 compression, and today I will change it to best_compression in the index template. If I understand correctly, tomorrow a new index will be created with the new best_compression codec. Is my thinking right?

I found it:

GET /index_name/_settings

and result looks like:

{
  "index_name-2020.08.04" : {
    "settings" : {
      "index" : {
        "codec" : "best_compression",
        "number_of_shards" : "1",
        "provided_name" : "index_name-2020.08.04",
        "creation_date" : "1596499201612",
        "number_of_replicas" : "1",
        "uuid" : "ynixVMDdQtKvriYj_XkCuQ",
        "version" : {
          "created" : "7070099"
        }
      }
    }
  }
}
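If you only want the codec and not the full settings output, you can narrow the response with the filter_path query parameter, e.g. across all matching daily indices (the index pattern is just an example):

```
GET /index_name-*/_settings?filter_path=*.settings.index.codec
```

Indices still on the default LZ4 codec may not show an explicit codec setting at all, since defaults are omitted unless you ask for them with include_defaults=true.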

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.