I am using Elasticsearch as the medium to store my data. What compression
mechanism is used when storing the data?
The reason I am asking is that if I have 50 GB of raw data coming in every
day, how much hardware disk space should I plan for? That's why I want to
know how this works.
Elasticsearch uses LZF on stored fields (including _source). The storage requirements will depend on your implementation and the complexity of your data; however, planning for a 1:1 ratio plus 10% with compression enabled ought to put you on the right path. Otherwise, you'll have to experiment to find out exactly what you'll require.
For storage requirements, you need around twice the disk space if you
incrementally grow your index, because of additional segment merge space
overhead.