JVM Heap size larger than 32 GB

We have several machines with 512 GB of RAM, and I wanted to know whether we can set the JVM heap size for Elasticsearch to more than 32 GB (up to 256 GB).

This page says we should keep the heap size below the threshold for compressed ordinary object pointers (oops).
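For reference, this is roughly how I've been checking whether compressed oops are still in effect at a given heap size. It's a minimal sketch using the HotSpot diagnostic MXBean; the class name is just for illustration, and the ~32 GB cutoff in the comments is the usual ballpark rather than an exact figure:

```java
import com.sun.management.HotSpotDiagnosticMXBean;
import java.lang.management.ManagementFactory;

// Minimal sketch: report whether this JVM is running with compressed oops.
// Run it with the same -Xmx you plan to give Elasticsearch, e.g.
//   java -Xmx31g CheckCompressedOops   vs.   java -Xmx64g CheckCompressedOops
public class CheckCompressedOops {
    public static void main(String[] args) {
        HotSpotDiagnosticMXBean hotspot =
                ManagementFactory.getPlatformMXBean(HotSpotDiagnosticMXBean.class);

        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        String compressedOops = hotspot.getVMOption("UseCompressedOops").getValue();

        System.out.println("Max heap (MB): " + maxHeapMb);
        // Typically "true" below roughly 32 GB of heap and "false" above it.
        System.out.println("UseCompressedOops: " + compressedOops);
    }
}
```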

However, I've found posts like this, which suggest setting JVM heap size beyond 64 GB.

Is that something you recommend avoiding, or is it still acceptable? What should I be aware of if we want to use up to 256 GB of RAM?

In general, what do you recommend for large clusters that require a total heap size larger than 3 TB?
We have a limited number of bare metal machines with up to 512 GB of RAM, and we currently run multiple Elasticsearch data nodes on each, each with a 31 GB JVM heap, but sharing I/O between nodes on the same machine can sometimes be a bottleneck.

What version are you running?

8.6.2

OK, so it uses G1GC, which helps with larger heaps.
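If you want to double-check which collector a node's JVM is actually running, something like this lists the active collectors (a small sketch; the class name is just for illustration, and you can also look at the nodes info API, GET _nodes/jvm, which should report the collectors in use):

```java
import java.lang.management.GarbageCollectorMXBean;
import java.lang.management.ManagementFactory;

// Small sketch: print the garbage collectors the current JVM is using.
// With Elasticsearch's default settings on recent JDKs you would expect
// to see the G1 young- and old-generation collectors here.
public class ListCollectors {
    public static void main(String[] args) {
        for (GarbageCollectorMXBean gc : ManagementFactory.getGarbageCollectorMXBeans()) {
            System.out.println(gc.getName());
        }
    }
}
```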

But putting more data on a single node doesn't negate that load; it just consolidates it.

Honestly, give it a go. Test your load profile. See what happens and if it works for your situation.

I understand the load would be consolidated, but is there a reason not to do that?
I believe the I/O sharing problem goes away if we run a single node on each bare metal machine rather than multiple nodes competing for I/O bandwidth.

The flip side might happen: you could end up with a node that is underutilised.
