After upgrading from Elasticsearch 7.6.1 to 7.8.1, CPU usage is reaching 100% most of the time on all 4 nodes. We didn't have this issue before the upgrade.
GC activity is almost 0% after the upgrade. We noticed that the bundled Java version has been upgraded to 14, but our jvm.options file still contains the settings for the previous Java version. We have not yet applied the new settings, and we would like to know whether this GC behaviour and the high CPU usage are related to that configuration.
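For reference, the default jvm.options shipped with 7.8 gates the GC flags by JDK version, so only the lines prefixed with 14- apply to the bundled JDK 14; our file was still carrying only the options for the older JDK. A trimmed sketch of that section, from memory, so it may differ slightly from the shipped file:

```
## GC configuration for JDK 8-13 (CMS)
8-13:-XX:+UseConcMarkSweepGC
8-13:-XX:CMSInitiatingOccupancyFraction=75
8-13:-XX:+UseCMSInitiatingOccupancyOnly

## G1GC is used on JDK 14 and later
14-:-XX:+UseG1GC
14-:-XX:G1ReservePercent=25
14-:-XX:InitiatingHeapOccupancyPercent=30
```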
I have now added the JVM options for Java 14, but CPU utilization is still reaching 100% and the production application went down as well. We never had this issue with 7.6.1.
We are planning to downgrade to 7.6.1. Would there be any data loss, given that it's within major version 7?
We have 3 master nodes and 1 data node.
I have upgraded to 7.9.1, but the issue is still there.
When I calculated the total number of data nodes required using the formula in the document below, the number of data nodes required increases with the average search response time (in milliseconds). But with more nodes, the response time should reduce. Did I understand this correctly? My calculation is below, followed by a short script restating it.
Calculation (100 = peak searches per second; each data node has 20 cores with 2 threads per core):

Average search response time = 2 sec (2,000 ms)
Peak threads = (100 × 2,000) / 1,000 = 200
Search thread pool size per node = (20 × 2 × 3 / 2) + 1 = 61
Data nodes required = 200 / 61 ≈ 3 nodes

Average search response time = 10 sec (10,000 ms)
Peak threads = (100 × 10,000) / 1,000 = 1,000
Search thread pool size per node = (20 × 2 × 3 / 2) + 1 = 61
Data nodes required = 1,000 / 61 ≈ 16 nodes
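Here is how I read the formula in code (assuming 100 is the peak searches per second and each node has 20 cores with 2 threads per core; rounding the thread requirement up gives 4 and 17 nodes rather than 3 and 16):

```python
import math

def required_data_nodes(peak_searches_per_sec, avg_response_ms,
                        cores_per_node=20, threads_per_core=2):
    # Each in-flight search occupies a search thread for its whole response
    # time, so slower searches need more concurrent threads at the same QPS.
    peak_threads = peak_searches_per_sec * avg_response_ms / 1000
    # Search thread pool size available on a single data node
    pool_per_node = (cores_per_node * threads_per_core * 3 / 2) + 1
    # Number of data nodes needed to supply that many search threads
    return math.ceil(peak_threads / pool_per_node)

print(required_data_nodes(100, 2_000))    # 4  (200 threads / 61 per node ≈ 3.3)
print(required_data_nodes(100, 10_000))   # 17 (1,000 threads / 61 per node ≈ 16.4)
```

Is the idea that slower searches hold threads for longer, so more threads (and therefore more nodes) are needed at the same query rate?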
Also, regarding the heap size, the document says: "This should be 50% of available RAM, and up to a maximum of 30GB RAM to avoid garbage collection." In our case the server has 160 GB of RAM and the heap size is set to 80 GB. Is this a problem?
Set Xmx and Xms to no more than the threshold that the JVM uses for compressed object pointers (compressed oops); the exact threshold varies but is near 32 GB.
If you use 80 GB of heap space, you will not be using compressed object pointers and will face issues.
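You can verify this on a running node; as far as I remember, Elasticsearch logs the heap size and whether compressed oops are enabled at startup, and the nodes info API exposes the same flag. A quick check (field name and log wording from memory, so treat this as a sketch):

```
# Nodes info API: look for "using_compressed_ordinary_object_pointers" per node
curl -s "http://localhost:9200/_nodes/jvm?pretty" | grep -i compressed

# Or check the startup log for a line like:
#   heap size [...], compressed ordinary object pointers [true]
```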