Hi,
I wanted to understand what compression logic Elasticsearch follows when storing documents.
Any documentation describing the compression factor would be really helpful for scaling Elasticsearch for an anticipated write load.
Hi,
Although this blog post is a bit old, it might give you a start on understanding some of the issues at play here. In the end there is no such thing as a fixed number: the size used on disk will depend on the kind of data and the analysis it undergoes. So you will probably have to measure your storage needs with a small subset of the expected data and then extrapolate from that.
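A minimal sketch of that measure-and-extrapolate approach. All numbers below are hypothetical placeholders; in practice you would index a representative sample, read the real `store.size` from `GET _cat/indices?v`, and plug in your own figures. (Elasticsearch also exposes an `index.codec` setting, where `best_compression` trades indexing speed for smaller stored fields than the default LZ4.)

```python
# Hypothetical figures: replace with values measured on your own cluster
# after indexing a representative sample and force-merging for stable sizes.
sample_docs = 100_000             # documents indexed in the test run (assumption)
sample_store_bytes = 350_000_000  # store.size reported for the sample index (assumption)

bytes_per_doc = sample_store_bytes / sample_docs

# Extrapolate to the anticipated write load, e.g. 50M docs/day kept for 30 days.
daily_docs = 50_000_000
retention_days = 30
projected_bytes = bytes_per_doc * daily_docs * retention_days
projected_tib = projected_bytes / 2**40

print(f"{bytes_per_doc:.0f} bytes/doc, projected ~{projected_tib:.1f} TiB")
```

Remember to measure with your real mappings and analyzers, since disabling `_source`, changing `index.codec`, or adding sub-fields can all shift the bytes-per-document figure substantially.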