We have a setup running on ES 2.4.0 and each index has at least 100 GB of data (one index per day); hardware specs are
vCPU : 4
RAM : 8 GB (5 GB for Elasticsearch)
It was amazing as we got the reports we wanted, but since last week we have been encountering OOMs periodically. One more piece of information: there are currently 25 indexes in total. So we would like to know whether there is any limitation/restriction along the lines of "this much memory can only hold this much data"?
Btw, what frustrates me the most is that the people who reply know the correct answer, but here, for a reason I just can't figure out or understand, they give only part of it, as an enigma or whatever else.
In short: you need about 1 GB of RAM for every 16 GB of ACTIVE data (more or less).
Long story made even shorter: you could run Curator on your cluster to automatically close indexes older than 5 days (or less), and you should be able to fix the OOM issue.
Unfortunately, to keep that much data (~2.5 TB) accessible you would need a hell of a lot more resources. Keep in mind the 1:16 ratio...
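To put rough numbers on that: 25 daily indexes of ~100 GB each is about 2.5 TB, so at a 1:16 ratio you would be looking at something like 150 GB of RAM if everything stays open, against the 5 GB of heap you have now. Closing older indexes keeps only the recent ones counting against the heap. Curator does this out of the box; purely as an illustration of what such a job boils down to against the Elasticsearch API, here is a minimal sketch (the logstash- prefix and the YYYY.MM.DD date pattern in the index names are assumptions, adjust to your naming scheme):

```python
# Minimal sketch of a "close indexes older than N days" job, assuming daily
# indexes named like logstash-YYYY.MM.DD. Closed indexes release their heap
# (fielddata, segment memory) but stay on disk and can be reopened later.
import datetime
import requests

ES = "http://localhost:9200"   # assumed single-node endpoint
PREFIX = "logstash-"           # assumed index name prefix
KEEP_DAYS = 5                  # keep the last 5 daily indexes open

cutoff = datetime.date.today() - datetime.timedelta(days=KEEP_DAYS)

# _cat/indices with h=index returns one index name per line
names = requests.get(ES + "/_cat/indices", params={"h": "index"}).text.split()

for name in names:
    if not name.startswith(PREFIX):
        continue
    try:
        day = datetime.datetime.strptime(name[len(PREFIX):], "%Y.%m.%d").date()
    except ValueError:
        continue  # skip indexes that don't match the date pattern
    if day < cutoff:
        # POST /<index>/_close frees the index's in-memory footprint
        resp = requests.post("{}/{}/_close".format(ES, name))
        print("closed", name, resp.status_code)
```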
Currently we are running this as a standalone node, since this setup is at startup level.
My understanding from your inputs is that 1:16 is applicable to a standalone environment. Kindly correct us if we are wrong, so this can help us design a proper architecture for our setup.
The 1:16 ratio is not a general recommendation or limitation, as the amount of data a node can handle depends on the data, query patterns, latency requirements and hardware. For use cases where the entire data set regularly needs to be queried at low latency, the ratio will naturally be lower than for many log analytics use cases, where the most recent data needs to be queryable with low latency but longer latencies are acceptable when querying/aggregating across the entire data set.
If there is no such limitation, then how can we do capacity planning?
For example, our setup has:
*. Multiple search queries (including sorting and aggregations), and these happen during peak hours only.
*. Bulk inserts via Logstash 24 hours a day.
*. Search queries covering at least the last 10 days (daily indexes with 100 GB of data each).
We are even ready to wait 5 minutes to get the data (as a graph in Kibana), but we need to avoid the OOMs, which are also causing problems with insertion due to Elasticsearch's data security model.
So can you guide us on how to do capacity planning for this scenario?
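Not a full answer to the sizing question, but before planning new hardware it usually helps to see what is actually consuming the heap on the current node. A rough sketch using the cat APIs (available on 2.4); http://localhost:9200 is an assumed endpoint:

```python
# Quick look at where the heap is going: overall heap usage per node, and
# per-field fielddata (the usual OOM suspect for sorts/aggregations in 2.x).
import requests

ES = "http://localhost:9200"  # assumed single-node endpoint

# Heap and RAM usage per node
print(requests.get(ES + "/_cat/nodes", params={
    "v": "true", "h": "name,heap.percent,heap.max,ram.percent"}).text)

# Fielddata held on the heap, broken down by field; large entries point at
# fields that would be better served by doc_values
print(requests.get(ES + "/_cat/fielddata", params={"v": "true"}).text)
```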
This will vary depending on your data and queries as well as your mappings, e.g. to what extent you are using doc_values and which version of Elasticsearch you are on. I always recommend running benchmarks on real data and load in order to determine this.
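As an aside on the doc_values point: in ES 2.x, sorting or aggregating on analyzed string fields loads fielddata onto the heap, which is a common OOM trigger, while not_analyzed strings and numeric fields use on-disk doc_values by default. Below is a hedged sketch of an index template that maps aggregation fields that way for the daily indexes; the field names, type name and the logstash-* pattern are made up for illustration:

```python
# Sketch of an ES 2.x index template: map fields used for sorting/aggregation
# as not_analyzed strings or numerics so they rely on doc_values (on disk)
# rather than fielddata (on heap). Field names are hypothetical.
import json
import requests

ES = "http://localhost:9200"  # assumed single-node endpoint

template = {
    "template": "logstash-*",          # assumed daily index pattern
    "mappings": {
        "logs": {                      # assumed type name
            "properties": {
                "status": {
                    "type": "string",
                    "index": "not_analyzed"  # doc_values on by default in 2.x
                },
                "response_time_ms": {
                    "type": "long"           # numerics use doc_values too
                }
            }
        }
    }
}

resp = requests.put(ES + "/_template/logs_docvalues",
                    data=json.dumps(template),
                    headers={"Content-Type": "application/json"})
print(resp.status_code, resp.text)
```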