We are struggling with disk space issues on our Elasticsearch nodes. I came across the best_compression codec, and the suggested way to apply it using Dev Tools is as follows:
1. Close all indices:
POST _all/_close
2. Apply best_compression to all indices:
PUT _all/_settings
{ "index.codec": "best_compression" }
3. Open all indices:
POST _all/_open
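For reference, the same sequence as plain curl commands (a sketch assuming an unsecured cluster reachable at localhost:9200; adjust the host and add authentication as needed):

# Close all indices
curl -X POST "localhost:9200/_all/_close"
# Apply the codec while the indices are closed
curl -X PUT "localhost:9200/_all/_settings" -H 'Content-Type: application/json' -d '{"index.codec": "best_compression"}'
# Re-open all indices
curl -X POST "localhost:9200/_all/_open"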
Could you confirm whether this is the correct way to achieve best compression? We are running the ELK 7.6.2 stack.
For the compression to take effect, I believe you also need to force merge the indices down to fewer segments, as only newly created or merged segments get the improved compression applied. The full sequence would be:
1. Close all indices:
POST _all/_close
2. Apply best_compression to all indices:
PUT _all/_settings
{ "index.codec": "best_compression" }
3. Open all indices:
POST _all/_open
4. Force merge all indices down to fewer segments so the existing data is rewritten with the new codec (you can verify the effect as shown below):
POST _all/_forcemerge?max_num_segments=1
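Note that a force merge down to a single segment rewrites the whole index and can be I/O intensive, so it is best run during a quiet period. To verify that the change took effect, you can compare on-disk sizes and segment counts before and after; a minimal check in Dev Tools (the column selection here is just a suggestion):

# Per-index store size before and after the merge
GET _cat/indices?v&h=index,pri.store.size,store.size
# Segment counts and sizes; expect fewer, larger segments after the merge
GET _cat/segments?v&h=index,segment,size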