I've deployed a test cluster that feeds from about 5 servers.
I was asked to spec out the disk requirements once 50 servers feed it logs, with ILM set up (ILM should delete logs older than 3 months or 1 year; I haven't decided yet).
Problem is, I'm not sure how much space it currently takes and how to calculate how much space it would take once ILM is set up.
Is it possible to calculate how much space Elasticsearch would take for 50 servers with ILM set up for 3 months or a year? And is it possible to set logs to be archived once they're older than X time?
`GET _cat/indices?v` will give you, among other details, `store.size` and `pri.store.size`.
Calculate the average size of your indices over a day/week/month, then extrapolate to the retention period you want.
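As a back-of-envelope sketch of that extrapolation (all the numbers here are made up; plug in the averages you actually measure):

```python
def projected_disk_gb(daily_gb_per_server: float,
                      servers: int,
                      retention_days: int,
                      replicas: int = 1) -> float:
    """Rough total cluster disk: daily volume per server * servers * retention,
    times (1 + replicas), since store.size includes replica copies."""
    return daily_gb_per_server * servers * retention_days * (1 + replicas)

# Hypothetical: 0.5 GB/day/server measured on the 5-server test cluster,
# with the default of 1 replica.
print(projected_disk_gb(0.5, 50, 90))   # 3-month retention -> 4500.0 GB
print(projected_disk_gb(0.5, 50, 365))  # 1-year retention -> 18250.0 GB
```

Leave some headroom on top of this: segment merges and indexing temporarily use extra disk, and Elasticsearch stops allocating shards when disk watermarks are hit.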
`store.size` is the on-disk size of the index including replicas; `pri.store.size` counts primary shards only.
Yes, for total size. It'll go up and down as segments merge, but watch it over time and you'll see the trend. Also turn on self-monitoring in Kibana; you might be able to see it at the cluster level too (I forget).
Elasticsearch compresses by default. You can increase the compression level (setting `index.codec` to `best_compression`), but I'm not sure it makes much difference.
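On the deletion question: ILM handles time-based deletion directly. A minimal policy sketch for the 3-month option (the policy name here is made up) might look like:

```
PUT _ilm/policy/logs-retention
{
  "policy": {
    "phases": {
      "delete": {
        "min_age": "90d",
        "actions": {
          "delete": {}
        }
      }
    }
  }
}
```

If you want to archive rather than delete, ILM's cold and frozen phases are meant for that; note that searchable snapshots in those phases require a paid license tier.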