Uneven JVM heap usage across cluster (40% to 74%) -- can I even this out?

My ES cluster is 8 identical nodes being fed web logs from Logstash. All of the nodes are listed in the Logstash output, so I am assuming they are bearing the indexing load equally.
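For context, the output section looks something along these lines (hostnames and index name are placeholders here, not the real values):

```
output {
  elasticsearch {
    # placeholder hostnames -- the real config lists all 8 data nodes
    hosts => ["es-node1:9200", "es-node2:9200", "es-node3:9200", "es-node4:9200",
              "es-node5:9200", "es-node6:9200", "es-node7:9200", "es-node8:9200"]
    index => "weblogs-%{+YYYY.MM.dd}"
  }
}
```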

Each node has 64 GB of RAM, with the ES heap set to the recommended 30 GB.
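(The heap is pinned the same way on every node; this is the jvm.options form, though the same could be done through ES_JAVA_OPTS:)

```
# config/jvm.options -- identical on all 8 nodes, min and max pinned
-Xms30g
-Xmx30g
```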

Java heap usage varies from 40% on the lowest-usage node to 70% (or more) on the highest. I usually have one or two nodes at about 70%, three or four nodes between 50% and 67%, and two or three between 30% and 45%. My master node is almost always in this bottom group.

Knowing that the Java heap goes into garbage collection at 75% usage, I'd like to even out the heap usage across all nodes, or at a minimum bring the top and bottom nodes into the middle ground.
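(As I understand it, that 75% figure is the default CMS trigger shipped in jvm.options, assuming the stock CMS collector rather than G1:)

```
# default GC settings in jvm.options for the bundled CMS collector
-XX:+UseConcMarkSweepGC
-XX:CMSInitiatingOccupancyFraction=75
-XX:+UseCMSInitiatingOccupancyOnly
```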

Does anyone have suggestions on how to balance this Java heap usage, or tools I can use to see why the same nodes are always in the top group at almost 75%? I have looked at these top nodes and do not see anything running on them that is different from the other nodes.

Your Java heap usage should ideally exhibit a saw-tooth pattern over time, so having different nodes with different heap usage at different points in time is not necessarily a problem, but rather expected. Are you monitoring heap usage, e.g. through X-Pack Monitoring, so you can visualise and compare the patterns for the different nodes over time? If so, could you share this?
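If you don't have Monitoring available, even a periodic spot check with the cat nodes and node stats APIs will show per-node heap and GC activity, for example (the column list is just a suggestion):

```
# current heap percentage per node, with the elected master flagged
GET _cat/nodes?v&h=name,master,heap.percent,heap.max

# per-node heap usage plus GC collection counts and times
GET _nodes/stats/jvm
```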

Thanks for the quick reply, Christian. Now that the cluster has been running for a few days I can see the pattern; the nodes seem to take turns garbage collecting, so things look steady.
