We are on ES version 6.8 with Java 11 or higher. Is monitoring garbage collection metrics even needed anymore?
My assumption:
G1 GC is fast enough that even with the maximum recommended heap size (~32 GB), you are never going to hit a 30-second stop-the-world garbage collection.
Please correct me if I am wrong, since I am doing an Elasticsearch 101 presentation and do not want to present anything obsolete.
If you mean the check related to compressed OOPs, then yes, it is still recommended to keep the heap below this threshold.
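
If it helps, here is a minimal sketch of how you could verify each node is actually below that threshold, assuming an unsecured cluster reachable at localhost:9200 (adjust the URL and auth for your setup). It reads the nodes info API, which reports the configured heap max and whether compressed ordinary object pointers are in use:

```python
# Sketch: check each node's heap max and compressed OOPs status via the nodes info API.
# Assumes an unsecured cluster at localhost:9200 -- adjust the URL/auth for your setup.
import json
from urllib.request import urlopen

with urlopen("http://localhost:9200/_nodes/jvm") as resp:
    nodes = json.load(resp)["nodes"]

for node_id, info in nodes.items():
    jvm = info["jvm"]
    heap_gib = jvm["mem"]["heap_max_in_bytes"] / 1024 ** 3
    oops = jvm.get("using_compressed_ordinary_object_pointers", "unknown")
    print(f"{info['name']}: heap max {heap_gib:.1f} GiB, compressed OOPs: {oops}")
```
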
You said you are on version 6.8; is your presentation based on this version? If so, it is already obsolete. 6.8 is not supported or maintained anymore, and a lot changed from 6.X to 7.X and from 7.X to 8.X.
I am aware that 6.8 is obsolete, but that is what we have installed. There is active work to move to 7.X, but that is still in the pipeline. I am from the Ops team and am bringing others on the team up to speed on the current setup.
I was more curious about the article's point that too large a JVM heap for the cluster might cause stop-the-world GC pauses longer than 30 seconds, which would lead ES to believe the node is down.
With the new G1 GC, if I have nodes with 64 GB of RAM, can I just always set the JVM heap to ~32 GB (under the compressed OOPs threshold) and not worry about fine-tuning the JVM based on usage, or about garbage collection running so long that ES thinks the node is dead?
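
For example, this is the rule of thumb as I understand it (my assumption, not official guidance): use no more than half of physical RAM for the heap, and cap it just below the compressed OOPs cutoff, so a 64 GB node would get roughly a 31 GB heap rather than a full 32 GB.

```python
# Rough heap-sizing rule of thumb as I understand it (an assumption, not official guidance):
# no more than half of physical RAM, and stay just under the compressed OOPs cutoff.
COMPRESSED_OOPS_CUTOFF_GB = 31.0  # small safety margin below the ~32 GB limit

def suggested_heap_gb(total_ram_gb: float) -> float:
    """Half of RAM, capped below the compressed OOPs threshold."""
    return min(total_ram_gb / 2, COMPRESSED_OOPS_CUTOFF_GB)

print(suggested_heap_gb(64))  # -> 31.0, i.e. roughly -Xms31g / -Xmx31g in jvm.options
```
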
Not doing any fine-tuning yet. I am new to Elasticsearch and to my team, and I am putting together a presentation to spread knowledge about which performance metrics we should monitor.
I was curious whether the JVM heap usage metric is still useful to monitor from ES 6.8 onward, which has G1 GC.
If I understand correctly, G1 GC is fast enough that even with a maximum-size heap (32 GB) we do not really need to be concerned about timeouts during a stop-the-world GC of the heap.
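
For what it's worth, this is roughly what I was planning to show as the thing to watch: per-node heap usage and cumulative GC pause time from the node stats API (a minimal sketch, again assuming an unsecured cluster at localhost:9200):

```python
# Sketch: pull per-node heap usage and cumulative GC time from the node stats API.
# Assumes an unsecured cluster at localhost:9200 -- adjust the URL/auth for your setup.
import json
from urllib.request import urlopen

with urlopen("http://localhost:9200/_nodes/stats/jvm") as resp:
    nodes = json.load(resp)["nodes"]

for node_id, stats in nodes.items():
    jvm = stats["jvm"]
    heap_pct = jvm["mem"]["heap_used_percent"]
    collectors = jvm["gc"]["collectors"]
    young_ms = collectors["young"]["collection_time_in_millis"]
    old_ms = collectors["old"]["collection_time_in_millis"]
    print(f"{stats['name']}: heap {heap_pct}%, GC time young={young_ms} ms, old={old_ms} ms")
```
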