Is there any performance comparison between the default index codec and best_compression?


I'm looking into ways to optimize the disk usage of the indices on my cluster, and before going down the route of removing the _source field I've decided to try changing the index codec to best_compression.
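For context, this is the change I'm considering (index name is just a placeholder; since `index.codec` is a static setting, it has to be set at index creation time or on a closed index):

```
PUT my-index
{
  "settings": {
    "index.codec": "best_compression"
  }
}
```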

The documentation mentions that best_compression provides a higher compression rate at the expense of slower stored fields performance.

But is there any performance comparison between default and best_compression published by Elastic? And is it still relevant after the many indexing/storage improvements made in recent versions?

While manually created indices use the default codec, which is based on LZ4, it seems that the data streams and backing indices created by Elastic Agent use best_compression as the default codec, per these template settings for example.

So, is it safe to assume that any performance impact will not be very noticeable in most cases?
