Does setting index.codec to best_compression affect Elasticsearch's or Graylog's performance?

We are using Elasticsearch, Graylog, and MongoDB to store our application logs. Disk space utilization is growing day by day, and although we already use a retention mechanism, we need to reduce it further. Will compressing the indices affect the performance of Elasticsearch or Graylog?
Does the compression happen in the background or on the main thread? Will it cause more latency on the client side (Graylog sidecar collector, GELF, etc.)?

Using best_compression does add a bit of overhead at indexing time. One way to avoid this is to not enable it for newly created indices; instead, wait until an index has become read-only, then set it just before you forcemerge the index down to a single segment. This saves resources as well as space and is not that difficult to manage.
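A minimal sketch of that sequence using the Elasticsearch REST API (the index name graylog_42 and the localhost:9200 endpoint are assumptions; adjust to your own rotation scheme). Because index.codec is a static setting, the index has to be closed before it can be changed:

```shell
# Close the (now read-only) index so the static codec setting can be changed.
curl -X POST "localhost:9200/graylog_42/_close"

# Switch the codec to best_compression (DEFLATE instead of the default LZ4).
curl -X PUT "localhost:9200/graylog_42/_settings" \
  -H 'Content-Type: application/json' \
  -d '{"index": {"codec": "best_compression"}}'

# Reopen the index.
curl -X POST "localhost:9200/graylog_42/_open"

# Forcemerge down to a single segment; segments are rewritten with the new codec.
curl -X POST "localhost:9200/graylog_42/_forcemerge?max_num_segments=1"
```

Note that the codec only applies to segments written after the change, which is why the forcemerge step matters: rewriting everything into one segment recompresses the existing data as well.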

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.