Why is my heap usage always high?

I have a cluster with 8 nodes, and all of them almost always show heap usage in the high 70% range. I never seem to see the usual "sawtooth" pattern in the heap graphs.

All the nodes run on individual machines with 64GB of RAM, and each node is given a 24GB heap, so they should have plenty of headroom.

I've looked at the fielddata size and it is almost always under 1GB. I've cleared it several times (both by restarting nodes and by running the clear-cache API).
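If it helps to see where the heap is going, the node stats API (`GET /_nodes/stats/indices,jvm`) reports the main per-node consumers. Here is a minimal sketch that summarises such a response, assuming the ES 1.x stats field layout; the `sample` document is trimmed and illustrative, not real cluster output:

```python
import json

def heap_breakdown(stats):
    # Summarise the main heap consumers from a node-stats response.
    # Field paths follow the ES 1.x layout; adjust for other versions.
    rows = {}
    for node_id, node in stats["nodes"].items():
        idx = node["indices"]
        rows[node["name"]] = {
            "fielddata_gb": idx["fielddata"]["memory_size_in_bytes"] / 2**30,
            "segments_gb": idx["segments"]["memory_in_bytes"] / 2**30,
            "heap_used_pct": node["jvm"]["mem"]["heap_used_percent"],
        }
    return rows

# Trimmed example of what a stats response looks like:
sample = {"nodes": {"abc123": {"name": "node-1",
    "indices": {"fielddata": {"memory_size_in_bytes": 1073741824},
                "segments": {"memory_in_bytes": 15032385536}},
    "jvm": {"mem": {"heap_used_percent": 78}}}}}
print(json.dumps(heap_breakdown(sample), indent=2))
```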

We have 2446 indices (several of them are daily rollovers, and we keep data back almost 1 year). Most of the indices have 4 primary shards plus 1 replica. The total number of shards is 17745.
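For context on why daily rollover indices drive the shard count up so quickly, a quick back-of-the-envelope (the numbers below are illustrative, not this cluster's exact figures):

```python
# Why daily rollover indices inflate shard counts: every day adds a
# full set of primaries and replicas, retained for the whole window.
primaries_per_index = 4
copies = 2            # 1 primary + 1 replica of each shard
days_retained = 365

shards_per_daily_index = primaries_per_index * copies
shards_per_year_per_stream = shards_per_daily_index * days_retained
print(shards_per_daily_index)        # 8 shards per daily index
print(shards_per_year_per_stream)    # 2920 shards for one daily stream
```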

Is heap usage supposed to be always this high? How do I find out what is using the heap?

Oh, and we are also running ES 1.7.3.

That's far too many shards! You are probably wasting a lot of resources on that alone.


Is 4 primary shards plus 4 replica shards per index too many?

How much data in the cluster?

Cluster stats says indices.store.size = 15.8T and it has 5.6 billion docs

Yeah, so under 1GB per shard on average, way too small.

Aim for <50GB per shard by reducing the shard count, or use weekly/monthly indices instead to increase the data in each shard.
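To make that sizing concrete, here is a rough sketch of how few primary shards that data volume actually needs at a ~40GB target (under the 50GB cap). It assumes the quoted 15.8T store size counts both primaries and replicas, so primary data is roughly half of it:

```python
import math

def primaries_needed(primary_data_gb, target_shard_gb=40):
    # Round up so no shard exceeds the target size.
    return math.ceil(primary_data_gb / target_shard_gb)

# ~15.8 TB of store, assumed to include one replica of everything,
# so roughly half of it is primary data.
primary_data_gb = 15.8 * 1024 / 2
print(primaries_needed(primary_data_gb))  # ~203 primaries vs. ~8900 today
```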

Ok, I'll try that out. Thanks


We are still dealing with high heap usage. We've changed how we roll over our indices to now be size-based. The total number of shards is now 5365 and the total size is 160T, which comes to about 30G per shard.

On one of the nodes, there are 318 shards. The doc count is 4.2B and the size on disk is 9.5T. The heap usage is 80% (27.4G). Query cache memory is 350M, fielddata memory is 350M, and segments memory is 14G (terms is 10.1G, stored_fields is 2.4G, norms is 1M, and doc_values is 1.5G). This particular node will climb from 75% heap usage to 80% heap usage in 3 minutes and then repeat.
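The segment figures above are internally consistent, and they show that the terms index dominates the segments memory; a quick check of the arithmetic, with all figures in GB as reported:

```python
# Sanity check: the segments memory components should add up to the
# reported segments total, with the terms index dominating.
terms, stored_fields, norms, doc_values = 10.1, 2.4, 0.001, 1.5
segments_total = terms + stored_fields + norms + doc_values
print(round(segments_total, 1))          # ~14.0, matching the reported 14G
print(round(terms / segments_total, 2))  # terms is ~72% of segments memory
```

Segments memory scales with the amount of indexed data held open on the node, which is why 9.5T per node leaves so little heap free.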

Any ideas on how to decrease the RAM usage without adding more ES instances?

At this stage you will just need to add more nodes.