I'm asking because I'd like to use it, but not if it will cause problems down the road should it be removed. E.g. if I set up my index to use best_compression and it is then removed in 2.xxxxx, what would I have to do to overcome that? Reindex all my data in order to upgrade?
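For reference, here's a minimal sketch of how I'm enabling it at index creation time (the index name my-index is just a placeholder):

```
# Create an index that stores its data using the best_compression codec
curl -XPUT 'localhost:9200/my-index' -H 'Content-Type: application/json' -d '{
  "settings": {
    "index": { "codec": "best_compression" }
  }
}'
```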
It would be rather absurd if future versions could not read older index segments. The compression is implemented in Lucene, and since ES 2.x the ES team has spent a lot of effort checking for backward compatibility in this area. It would be like gzip suddenly being unable to decompress data from 1999.
Here is my guess: "best compression" seems experimental not because it will be removed, but because the gain over the standard compression ratio is modest and it won't become the default setting. The CPU cost does not seem to justify the small amount of space saved.
You should test it and report your findings; maybe the setting is worth keeping, contrary to common expectations.
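For instance, a rough way to measure the trade-off, assuming the same documents have been indexed into two test indices (the names test-default and test-best are placeholders):

```
# Compare on-disk size and doc counts of an index using the default codec
# versus one created with index.codec: best_compression
curl 'localhost:9200/_cat/indices/test-default,test-best?v&h=index,store.size,docs.count'
```

That would at least show whether the extra CPU buys a meaningful reduction in store size for your particular data.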