JVM heap over time graphs

I have a question about JVM heap space over time. There is excellent documentation on that here:

but it does not show what the JVM heap space should look like in an ideal setup.
We are running Elasticsearch on Bluemix Kubernetes with 18 nodes. A curl command queries Elasticsearch every 15 minutes for the maximum heapPercent value from the /_cat/nodes API, and our graphs look like this over a four-hour timeframe:

According to the documentation linked above, the up-and-down sawtooth pattern indicates too much heap, but we also have nodes whose graphs are flat, and all nodes share the same configuration. The flatliners include master and data nodes alike, as do the nodes that go up and down.
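For reference, the polling described above boils down to taking the largest heapPercent column from the /_cat/nodes output. A minimal sketch of that step (the node names and values below are made up; in practice the cluster is queried with curl):

```python
# Sketch: pick the maximum heapPercent out of /_cat/nodes output, as the
# 15-minute polling described above does. The sample is made-up output from:
#   curl -s 'localhost:9200/_cat/nodes?h=name,heapPercent'
sample = """\
master-node-1 42
data-node-1 67
data-node-2 31
"""

def max_heap_percent(cat_nodes_output: str) -> int:
    """Return the highest heapPercent value across all listed nodes."""
    return max(int(line.split()[-1])
               for line in cat_nodes_output.strip().splitlines())

print(max_heap_percent(sample))  # prints 67, the busiest node's heap usage
```

A flat graph from this query only means the busiest node's heap stayed near the same percentage at each 15-minute sample; it can hide sawtooth behavior happening between samples.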

Occasionally we get GC INFO messages in the gc.log and elastic.log files:
[INFO ][o.e.m.j.JvmGcMonitorService] [master-node-x] [gc][419468] overhead, spent [255ms] collecting in the last [1s]

Do you have any documentation or thoughts on that?
