Co-ordinating node JVM heap size increase

I have a cluster of 3 nodes that are master-eligible, data, and ingest, plus 2 co-ordinating-only nodes. Running on version 6.8.6.
I created a Data Table visualisation with a Unique Count of a field as the metric and two Terms aggregation sub-buckets (each with a size of 10), and I noticed for the first time that JVM heap usage on the co-ordinating nodes increased dramatically.
The query did target ~250 indices (1 shard, 1 replica per index), but I run wildcard queries like that all the time, and they have never before affected the co-ordinating nodes. Is there an explanation for this?
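For reference, the Dev Tools equivalent of the visualisation's request would look roughly like this (the index pattern and field names here are placeholders, not the real ones):

```
GET logs-*/_search
{
  "size": 0,
  "aggs": {
    "bucket_a": {
      "terms": { "field": "field_a", "size": 10 },
      "aggs": {
        "bucket_b": {
          "terms": { "field": "field_b", "size": 10 },
          "aggs": {
            "unique_values": {
              "cardinality": { "field": "some_field" }
            }
          }
        }
      }
    }
  }
}
```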
Thank you.

Hey @kmousikos,

There are many factors that can contribute to heap utilization, so it's hard to say based on the information you provided.

Did the heap usage go down over time? It's not at all uncommon for heap utilization to fluctuate. If you chart it, it often looks like a zig-zag (sawtooth) pattern, where usage climbs for a while and then falls suddenly when the JVM performs garbage collection.
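If you want to confirm that this is ordinary GC behaviour, you can watch heap and garbage-collection counters on just the co-ordinating nodes with the nodes stats API, using the built-in `coordinating_only:true` node filter:

```
GET _nodes/coordinating_only:true/stats/jvm
```

In the response, `jvm.mem.heap_used_percent` shows the current heap level per node, and the `jvm.gc.collectors` section shows collection counts and times; repeated calls over a few minutes will reveal whether the heap is cycling normally.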

Did this visualization run slowly? If so, we can profile it to see whether something expensive in the underlying query can be optimized. To do this, open the visualization and click the Inspect link at the top. In the flyout that appears, switch to the Request view and copy the Elasticsearch query it shows you.

Next, navigate to Kibana's Dev Tools and paste the query into the Search Profiler. After clicking the Profile button, you'll see a breakdown of all the steps Elasticsearch had to take to fulfill the request, and how much time it spent in each phase.
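Alternatively, you can get the same information without the Search Profiler UI by adding `"profile": true` to the copied query body and running it in the Dev Tools Console (the index pattern below is a placeholder):

```
GET logs-*/_search
{
  "profile": true,
  "size": 0,
  "query": { "match_all": {} }
}
```

The `profile` section of the response breaks down query and aggregation timings per shard; note that against ~250 indices the output will be large, so it's easiest to read for a narrowed-down index pattern.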

Hello @Larry_Gregory, and thank you for the reply. I did not provide more details about the query because this was a more general question about the resource usage of co-ordinating-only nodes, which is why I posted it on the Kibana forum. I have been working on and scaling this same cluster for more than a year and have never before noticed heap and CPU usage fluctuations on the co-ordinating-only nodes (which is in keeping with the relevant webinars, which describe co-ordinating-only node requirements as LOW in memory, LOW in compute, and LOW in storage).

The two spikes you see in the screenshot caused the Elasticsearch service running on that EC2 instance to stop. The two EC2 instances that act as co-ordinating-only nodes also host the cluster's two Kibana instances. As of today, JVM heap usage has been stable at ~55%, as shown at the right end of the graph line.

Hi @Larry_Gregory. Is there perhaps a more suitable forum to get an answer on whether an increase in compute and memory usage is to be expected on co-ordinating-only nodes?

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.