How to optimise heap usage on Elasticsearch nodes?

We use Elasticsearch for analytics and ingest around 3 billion events a day. For this we create daily indices, and each index has a TTL of 30 days. We delete older indices through a cron job.
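The retention scheme described above (daily indices, deleted after 30 days by a cron job) can be sketched roughly as follows. The `events-YYYY.MM.DD` naming pattern and the `indices_to_delete` helper are assumptions for illustration, not the poster's actual script:

```python
from datetime import date, timedelta

RETENTION_DAYS = 30  # each daily index is kept for 30 days

def indices_to_delete(existing, today, prefix="events-"):
    """Return the daily indices older than the retention window.

    `existing` holds index names like 'events-2020.09.01'; the naming
    pattern is a hypothetical choice for this sketch.
    """
    cutoff = today - timedelta(days=RETENTION_DAYS)
    expired = []
    for name in existing:
        day = date(*map(int, name[len(prefix):].split(".")))
        if day < cutoff:
            expired.append(name)
    return expired

# Example: on 2020-10-13, an index from 2020-09-01 is past retention.
print(indices_to_delete(["events-2020.09.01", "events-2020.10.01"],
                        date(2020, 10, 13)))
# → ['events-2020.09.01']
```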

Cluster Size : 200 nodes

Each node has below configuration:
Ram : 256 GB
Heap : 30 GB
Cores : 40
180-200 shards per node

A few of our daily indices have 80 shards each, and each shard holds 20 GB to 30 GB of data.

We regularly see heap usage sitting at ~75%.

  1. If I reduce the shards per index from 80 to 60, which would increase the size of each shard from 30 GB to 40 GB, will that reduce pressure on the heap?
    I am assuming each node would then hold fewer shards and hence fewer inverted indices

  2. As per this article, will moving to 7.7 help reduce pressure on the heap?
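For reference, the arithmetic behind option 1 works out roughly as follows. This is a sketch using the figures quoted above; the ~2 TB of daily primary data is inferred from 80 shards at 20-30 GB each (midpoint 25 GB):

```python
# Figures quoted in the question: 80 primary shards/day at 20-30 GB each.
daily_data_gb = 80 * 25          # ~2000 GB of primaries per day (midpoint)

# The same data spread over 60 shards instead of 80:
new_shard_size_gb = daily_data_gb / 60
print(round(new_shard_size_gb, 1))   # ~33.3 GB per shard at the midpoint

# With replication factor 1, total shards for a daily index drop
# from 160 to 120, i.e. 40 fewer shards spread across the cluster.
print(80 * 2 - 60 * 2)               # 40
```

At the 30 GB upper end the new shard size would be 30 × 80 / 60 = 40 GB, matching the figure in the question.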

Kindly help with the above; our nodes keep going down because of heap pressure.

Nitish Goyal

Which version are you currently on? Going to the latest version should reduce heap usage, but doing so on a 200-node cluster may take a while, especially if you are on an older version.

If you have fast disks you can try to forcemerge indices that are no longer written to down to a single segment as this has the potential to save quite a bit of heap. This may help you without having to migrate, but exactly how much it helps depends on your data.
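For context, a force merge down to one segment is issued per index via the Force Merge API (`POST <index>/_forcemerge?max_num_segments=1`). A small sketch that builds the endpoint; the host and index name are hypothetical:

```python
from urllib.parse import urlencode

def forcemerge_url(host, index, max_num_segments=1):
    """Build the Force Merge API URL for one index.

    POSTing to this endpoint merges the index's segments down to
    `max_num_segments`; only do this for indices that are no longer
    being written to, or merged-away segments can reappear.
    """
    query = urlencode({"max_num_segments": max_num_segments})
    return f"{host}/{index}/_forcemerge?{query}"

# Hypothetical host and index name for illustration:
print(forcemerge_url("http://localhost:9200", "events-2020.09.01"))
# → http://localhost:9200/events-2020.09.01/_forcemerge?max_num_segments=1
```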

We are currently on 6.8; three months back we did a rolling upgrade from 6.3.0. Each node has 6 TB of disk attached, and we have around 600 TB of data in the cluster.

We already have a cron job in place that force-merges segments for indices that are no longer actively written to.

Shall we look at the following?

  1. Reducing shards per index? I read in a blog that each shard should hold less than 50 GB of data.
  2. If not option 1, shall we shrink existing indices instead?
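For context on option 2: the Shrink API reduces an existing index's primary shard count in place (`POST <source>/_shrink/<target>`), and the target shard count must divide the source count evenly. A sketch of the request, with hypothetical index names:

```python
import json

def shrink_request(source_index, target_index, target_shards):
    """Build the Shrink API endpoint and body.

    Shrinking 80 primaries down to 40 is valid because 40 divides 80
    evenly; before shrinking, the source index must be made read-only
    and have a copy of every shard allocated to one node.
    """
    endpoint = f"/{source_index}/_shrink/{target_index}"
    body = {"settings": {"index.number_of_shards": target_shards}}
    return endpoint, json.dumps(body)

# Hypothetical names: shrink a finished daily index from 80 to 40 primaries.
endpoint, body = shrink_request("events-2020.09.01",
                                "events-2020.09.01-shrunk", 40)
print(endpoint)
# → /events-2020.09.01/_shrink/events-2020.09.01-shrunk
```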

Can you suggest any other recommendations?

That is good to hear.

I do not think this will help much. You have a reasonable shard size and a reasonable number of shards per node.

I do not think this will help either as your shards are already reasonably large and you are forcemerging them down to a single segment.

As you have large hosts I was initially going to suggest running more than one node per host, but given that you already have 200 nodes and the disks are reasonably full, such a migration would be difficult and would increase the number of nodes in the cluster beyond what I would generally recommend.

Another option would be to adopt G1GC, which supports larger heaps. It requires a recent Java version and possibly also a newer version of Elasticsearch. A larger heap would, however, mean you no longer benefit from compressed object pointers and would have a smaller file system page cache, which reduces the potential gain.

It would be good to get a better idea of the cluster and see if there is anything unusual. Could you please provide the full output of the cluster stats API? I am not sure there are any big gains to be made, so upgrading to the latest version might be the best way forward.

Cluster Stats API

{
  "_nodes": {
    "total": 211,
    "successful": 211,
    "failed": 0
  },
  "cluster_name": "prd-xx",
  "cluster_uuid": "xxx",
  "timestamp": 1602565799094,
  "status": "green",
  "indices": {
    "count": 4588,
    "shards": {
      "total": 35848,
      "primaries": 17924,
      "replication": 1,
      "index": {
        "shards": {
          "min": 2,
          "max": 160,
          "avg": 7.813426329555361
        },
        "primaries": {
          "min": 1,
          "max": 80,
          "avg": 3.9067131647776807
        },
        "replication": {
          "min": 1,
          "max": 1,
          "avg": 1
        }
      }
    },
    "docs": {
      "count": 194556935778,
      "deleted": 163123439
    },
    "store": {
      "size_in_bytes": 503410814095348
    },
    "fielddata": {
      "memory_size_in_bytes": 289599575936,
      "evictions": 767911
    },
    "query_cache": {
      "memory_size_in_bytes": 516833416064,
      "total_count": 299279498997,
      "hit_count": 10647121478,
      "miss_count": 288632377519,
      "cache_size": 14126195,
      "cache_count": 103558469,
      "evictions": 89432274
    },
    "completion": {
      "size_in_bytes": 0
    },
    "segments": {
      "count": 860459,
      "memory_in_bytes": 1521779458155,
      "terms_memory_in_bytes": 1429778014307,
      "stored_fields_memory_in_bytes": 49775618856,
      "term_vectors_memory_in_bytes": 0,
      "norms_memory_in_bytes": 8479335168,
      "points_memory_in_bytes": 28213633220,
      "doc_values_memory_in_bytes": 5532856604,
      "index_writer_memory_in_bytes": 60621898486,
      "version_map_memory_in_bytes": 780473520,
      "fixed_bit_set_memory_in_bytes": 2400,
      "max_unsafe_auto_id_timestamp": 1573720765787,
      "file_sizes": { }
    }
  },
  "nodes": {
    "count": {
      "total": 211,
      "data": 198,
      "coordinating_only": 0,
      "master": 3,
      "ingest": 211
    },
    "versions": [ ... ],
    "os": {
      "available_processors": 3340,
      "allocated_processors": 3340,
      "names": [
        {
          "name": "Linux",
          "count": 211
        }
      ],
      "pretty_names": [
        {
          "pretty_name": "Ubuntu 16.04.5 LTS",
          "count": 211
        }
      ],
      "mem": {
        "total_in_bytes": 18890354393088,
        "free_in_bytes": 313250426880,
        "used_in_bytes": 18577103966208,
        "free_percent": 2,
        "used_percent": 98
      }
    },
    "process": {
      "cpu": {
        "percent": 4009
      },
      "open_file_descriptors": {
        "min": 5047,
        "max": 6302,
        "avg": 6030
      }
    },
    "jvm": {
      "max_uptime_in_millis": 9547783916,
      "versions": [
        {
          "version": "1.8.0_252",
          "vm_name": "OpenJDK 64-Bit Server VM",
          "vm_version": "25.252-b09",
          "vm_vendor": "Private Build",
          "count": 60
        },
        {
          "version": "1.8.0_265",
          "vm_name": "OpenJDK 64-Bit Server VM",
          "vm_version": "25.265-b01",
          "vm_vendor": "Private Build",
          "count": 151
        }
      ],
      "mem": {
        "heap_used_in_bytes": 4727781333392,
        "heap_max_in_bytes": 6279137591296
      },
      "threads": 43286
    },
    "fs": {
      "total_in_bytes": 1267746304614400,
      "free_in_bytes": 762737566871552,
      "available_in_bytes": 762737180995584
    },
    "plugins": [ ],
    "network_types": {
      "transport_types": {
        "security4": 211
      },
      "http_types": {
        "security4": 211
      }
    }
  }
}
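Working through the numbers above gives a rough picture of where the heap is going. This is a back-of-the-envelope sketch; the per-node figures assume the cluster-wide totals are spread evenly over the 198 data nodes:

```python
# Totals copied from the cluster stats output above (bytes).
heap_used   = 4727781333392
heap_max    = 6279137591296
segments    = 1521779458155   # segments.memory_in_bytes (mostly terms memory)
fielddata   = 289599575936    # fielddata.memory_size_in_bytes
query_cache = 516833416064    # query_cache.memory_size_in_bytes
data_nodes  = 198

GB = 1000 ** 3  # decimal gigabytes

print(round(100 * heap_used / heap_max))        # ~75% heap used cluster-wide
print(round(segments / data_nodes / GB, 1))     # ~7.7 GB segment memory per data node
print(round(fielddata / data_nodes / GB, 1))    # ~1.5 GB fielddata per data node
print(round(query_cache / data_nodes / GB, 1))  # ~2.6 GB query cache per data node
```

On a 30 GB heap, ~7.7 GB of largely static segment memory is roughly a quarter of the heap by itself, which is consistent with 7.7+'s move of terms memory off-heap being one of the larger wins available here.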

It is not recommended or supported to use G1GC with Java 8. If you download the latest version, which uses G1GC, you can see the following in the jvm.options file:

## G1GC Configuration
# NOTE: G1 GC is only supported on JDK version 10 or later
# to use G1GC, uncomment the next two lines and update the version on the
# following three lines to your version of the JDK
# 10-13:-XX:-UseConcMarkSweepGC
# 10-13:-XX:-UseCMSInitiatingOccupancyOnly

In order to use G1GC I believe you should therefore upgrade your JVM.


  1. Would you like to recommend anything based on cluster stats api response?

  2. Upgrading each box to Java > 10 is a tedious process for us considering it's a 200 node cluster, but we will keep this in mind and upgrade when we have the bandwidth.

  3. Asking again: shall we increase the maximum size of each shard from 30 GB to 50 GB, considering we have a 10G network and recovery would be quite fast? Will we observe a decrease in heap usage if we reduce the number of shards per node from 200 to 160?

  4. We will add these settings into our jvm.options

Not that I can see. I think upgrading to Elasticsearch 7.9 is probably the best way to address this.

I am not sure to what extent this will help, so you may want to test first.

I do not think this will save you a lot of heap as you are already forcemerging down to a single segment, but I cannot know for sure.

That is only applicable when running G1GC, which is not recommended on Java 8.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.