Do you have monitoring enabled for your cluster? If so, you can check the Java heap and other memory utilisation metrics and look for correlations between high heap/memory usage and the GC pauses.
For example, if the heap is running out, the GC process will be triggered, and if resources on your cluster are strained, this can leave the cluster busy with GC instead of serving requests, thus increasing response times.
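As a minimal sketch of how to check this by hand, heap and GC statistics can be pulled from the node stats API. I'm assuming the cluster is reachable on `localhost:9200`; adjust the host and add authentication as needed:

```shell
# Hypothetical endpoint: adjust host/port and credentials for your cluster.
# Reports heap usage percentage and GC collection counts/times per node.
curl -s "localhost:9200/_nodes/stats/jvm?filter_path=nodes.*.jvm.mem.heap_used_percent,nodes.*.jvm.gc&pretty"
```

If `heap_used_percent` sits persistently high and `collection_time_in_millis` keeps climbing between calls, that would support the theory that GC is eating into your response times.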
What is the status of CPU load and resources? 1 s for GC sounds very long, which makes me think the cluster might be resource constrained. In our environments, long GC times are almost always connected to high CPU usage, either because a core is saturated or because the overall load is too high.
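To see whether the CPU is being burned by GC or by something else at the moment of a pause, the hot threads API can help (again assuming the cluster is reachable on `localhost:9200`):

```shell
# Snapshot of the busiest threads on each node; GC or merge threads
# dominating the output during a slow period points at the culprit.
curl -s "localhost:9200/_nodes/hot_threads?threads=5"
```

Capturing this a few times while response times are degraded gives a much clearer picture than averaged CPU percentages.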
If I am reading the official support matrix correctly, Azul Zing has not been supported for quite a while. Is this issue reproducible with the bundled/supported JVM?
Do you have any external process pulling data every 10 minutes that might be resource intensive and trigger this?
@joean407 CPU was at ~44%; heap utilisation is at ~70%.
@Christian_Dahlqvist we don't have any external process pulling data every 10 minutes.
Yes, I have also gone through the support matrix, and it says Azul Zing is not supported.
We haven't tried any other JVM so far.