Elasticsearch: OutOfMemory errors

Hi,

We have a setup running on ES 2.4.0, and each index has at least 100 GB of data (one index per day). The hardware specs are:

vCPU : 4
RAM : 8 GB (5 GB for Elasticsearch)

It was amazing, as we got the reports we wanted, but since last week we have been encountering OOM errors periodically. One more piece of information: there are currently 25 indices in total. So we would like to know whether there is any limitation/restriction on how much data a given amount of memory can hold.

Thanks in advance. Kindly guide us to solve this.

There are limits to how much data a single instance of ES can hold; given your heap size, I'd suggest you have reached that limit.
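
If you want to see how close you are, here is a minimal sketch (assuming the official elasticsearch-py client and a node reachable on localhost:9200; adjust the host to your setup) that prints JVM heap usage per node:

```python
from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# nodes.stats(metric="jvm") returns per-node JVM statistics,
# including how much of the configured heap is currently in use
stats = es.nodes.stats(metric="jvm")
for node_id, node in stats["nodes"].items():
    mem = node["jvm"]["mem"]
    print("%s: heap %d%% used (%d / %d bytes)" % (
        node.get("name", node_id),
        mem["heap_used_percent"],
        mem["heap_used_in_bytes"],
        mem["heap_max_in_bytes"],
    ))
```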

Here's an "actual" answer.

BTW, what frustrates me the most is that the people who reply know the correct answer, but for a reason I just can't figure out or understand, they only give part of it, as an enigma or whatever.

In short: you need roughly 1 GB of RAM for every 16 GB of ACTIVE data (more or less).

Long story made even shorter: you could run Curator on your cluster to automatically close indices older than 5 days or so, and you should be able to fix the OOM issue.
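
Curator is the usual tool for this, but the same logic as a rough plain-Python sketch (assuming daily indices named like logstash-YYYY.MM.DD and the elasticsearch-py client; the prefix, date format and retention below are placeholders to adjust for your setup) looks like:

```python
from datetime import datetime, timedelta

from elasticsearch import Elasticsearch

PREFIX = "logstash-"       # hypothetical prefix -- change to your index naming
DATE_FORMAT = "%Y.%m.%d"   # suffix of the daily indices
KEEP_DAYS = 5              # keep only the last 5 days open

es = Elasticsearch(["http://localhost:9200"])
cutoff = datetime.utcnow() - timedelta(days=KEEP_DAYS)

# get_settings returns a dict keyed by concrete index name
for index in es.indices.get_settings(index=PREFIX + "*"):
    try:
        index_date = datetime.strptime(index[len(PREFIX):], DATE_FORMAT)
    except ValueError:
        continue  # skip anything that does not match the daily pattern
    if index_date < cutoff:
        print("closing %s" % index)
        es.indices.close(index=index)
```

Closed indices no longer consume heap, which is why this tends to make the OOM go away.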

Unfortunately, to keep that much data (~2.5 TB) accessible you would need a hell of a lot more resources. Keep in mind the 1:16 ratio...
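
As a back-of-the-envelope check of that ratio against the numbers in this thread (25 daily indices of ~100 GB each; this is a rule of thumb, not an official Elasticsearch limit):

```python
# Rule-of-thumb check, not an official Elasticsearch limit.
indices = 25
gb_per_index = 100
ram_to_data_ratio = 16

total_data_gb = indices * gb_per_index                # 2500 GB, i.e. ~2.5 TB
ram_needed_gb = total_data_gb / float(ram_to_data_ratio)

print("total data: %d GB" % total_data_gb)            # 2500
print("RAM suggested by the 1:16 rule: ~%d GB" % ram_needed_gb)  # ~156
print("heap actually available here: 5 GB on an 8 GB node")
```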

Thanks Alex..

Currently we are running this as a standalone node, since this setup is at a startup level.

My understanding from your input is that the 1:16 ratio applies to a standalone environment. Kindly correct us if we are wrong, so this can help us design a proper architecture for our setup.

Thanks in Advance.

The 1:16 ratio is not a general recommendation or limitation, as the amount of data a node can handle depends on the data, query patterns, latency requirements and hardware. For use cases where the entire data set regularly needs to be queried at low latency, the ratio will naturally be lower than for a lot of log analytics use cases, where the most recent data needs to be queryable with low latencies but longer latencies are acceptable when querying/aggregating across the entire data set.

Thanks Christian.

If there is no such limitation, then how can we do capacity planning?

For example, our setup has

*. Multiple search queries (including sorting and aggregations), which happen during peak hours only.
*. Bulk inserts via Logstash running 24 hours a day.
*. Search queries covering at least the last 10 days (daily indices with 100 GB of data each).

We are even ready to wait 5 minutes to get the data (as a graph in Kibana), but we need to avoid OOM, which is causing problems with insertion as well due to the data security model of Elasticsearch.

So can you guide us on how we can do capacity planning for this scenario?

This will vary depending on your data and queries as well as your mappings, e.g. to what extent you are using doc_values and which version of Elasticsearch you are on. I always recommend running benchmarks on real data and load in order to determine this.
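
For example, a very rough sketch of such a benchmark (the index pattern, query body and iteration count below are placeholders, assuming the elasticsearch-py client; replace them with a real query from your Kibana dashboards and run it while your normal Logstash load is active):

```python
import time

from elasticsearch import Elasticsearch

es = Elasticsearch(["http://localhost:9200"])

# Hypothetical aggregation -- replace with a query taken from your dashboards
QUERY = {
    "size": 0,
    "aggs": {
        "per_hour": {
            "date_histogram": {"field": "@timestamp", "interval": "hour"}
        }
    },
}

# Run the query repeatedly and record how long each round trip takes
latencies = []
for _ in range(50):
    start = time.time()
    es.search(index="logstash-*", body=QUERY)
    latencies.append(time.time() - start)

latencies.sort()
print("median: %.2fs  p95: %.2fs  max: %.2fs" % (
    latencies[len(latencies) // 2],
    latencies[int(len(latencies) * 0.95)],
    latencies[-1],
))
```

Watching heap usage while this runs against your real data volume tells you far more than any generic ratio.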

We aren't avoiding answering anything, and that last sentence is not the answer.
It's more complex than that, as @Christian_Dahlqvist points out.