Heap Size for 256 GB RAM Machine

Hi,

We are running a 50-node cluster in production. Each node has the following configuration:
RAM: 256 GB
Disk: 2 × 5.6 TB = 11.2 TB
Cores: 80 vCPUs
Heap: 29 GB
GC algorithm: G1GC

We index roughly 2 billion documents a day into daily indices, and we retain 30 days of data in the cluster.
Each node holds around 4 TB of data across roughly 500 shards.
Write/read ratio: 90/10

Most of our search queries are aggregation queries, and our indexing requests are bulk requests.

With increasing load on the cluster, we are seeing a lot of GC activity. Based on this blog post, https://www.elastic.co/blog/a-heap-of-trouble, which heap size should we use for our cluster?

  1. 29 GB heap
  2. ~50% of physical RAM (~120 GB heap)

Kindly suggest which heap size we should choose.
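
For context, here is roughly how we are observing the GC pressure; a minimal sketch against the standard nodes stats API (the `filter_path` shown is just one way to pull out the per-collector counts and times, and `jq` is assumed to be installed):

```
# Young- and old-generation GC counts and times per node
curl -s 'localhost:9200/_nodes/stats/jvm?filter_path=nodes.*.jvm.gc.collectors' | jq .
```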

Regards,
Nitish Goyal

I would recommend sticking with the 29 GB heap, but running multiple nodes per host if you are having problems with heap pressure. If you want a bit more separation you could consider using containers.
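
For reference, a minimal sketch of what the 29 GB setting looks like in `jvm.options`, plus one way to verify that the JVM is actually getting compressed ordinary object pointers, which is the whole point of staying below the cutoff discussed in that blog post. The exact cutoff varies by JVM, so it is worth checking rather than assuming:

```
# config/jvm.options -- keep the minimum and maximum heap equal
-Xms29g
-Xmx29g
```

```
# Confirm compressed oops are in effect on every node
curl -s 'localhost:9200/_nodes/jvm?filter_path=nodes.*.jvm.using_compressed_ordinary_object_pointers'
```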
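
If you do experiment with multiple nodes per host, two settings are worth knowing about; a sketch: `cluster.routing.allocation.same_shard.host` keeps a primary and its replica off the same physical machine, and `node.max_local_storage_nodes` (a pre-8.0 setting) is only needed if the nodes share a data path:

```
# elasticsearch.yml -- never allocate a shard and its replica to the same host
cluster.routing.allocation.same_shard.host: true

# Only needed if both nodes share the same path.data
node.max_local_storage_nodes: 2
```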

Hi,

I tried running 2 VMs per machine, but after doing that I am seeing a performance degradation in indexing: indexing speed dropped to between 1/5th and 1/2 of the bare-metal (BM) rate, varying across VMs. So splitting a machine into multiple nodes isn't helping us out; bare metal outperforms the VMs by a huge margin.

Because of this performance impact we want to stick to one node per host. In that case, what heap settings would you recommend?

Regards,
Nitish Goyal

I am surprised by the performance impact, as I have not seen this myself. Did you have the same shard count and configuration in both scenarios?
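
To make the comparison apples-to-apples, it may help to capture the same picture in both setups; something along these lines using the standard cat APIs (the `h` columns shown are just the ones I tend to look at first):

```
# Shard count and disk usage per node
curl -s 'localhost:9200/_cat/allocation?v'

# Write thread pool pressure per node; rejections indicate indexing back-pressure
curl -s 'localhost:9200/_cat/thread_pool/write?v&h=node_name,active,queue,rejected'
```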
