GC Overhead Collecting Frequently

Hello, I am frequently seeing GC overhead messages in my ES cluster, which is deployed on AWS.
JVM memory usage is high (96% max, 79% min on average). I would appreciate any ideas or suggestions.
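
For reference, per-node heap usage and GC activity can be pulled from the nodes stats API. Below is a minimal sketch (Python, standard library only), assuming the cluster is reachable at http://localhost:9200 without authentication; adjust the host and add credentials to match your deployment:

    import json
    import urllib.request

    ES_HOST = "http://localhost:9200"  # assumption: adjust to your cluster endpoint

    def get_json(path):
        """Fetch an Elasticsearch API endpoint and decode the JSON response."""
        with urllib.request.urlopen(ES_HOST + path) as resp:
            return json.loads(resp.read().decode("utf-8"))

    # Per-node JVM heap usage and old-generation GC counters (valid on 6.x).
    stats = get_json("/_nodes/stats/jvm")
    for node in stats["nodes"].values():
        jvm = node["jvm"]
        old_gc = jvm["gc"]["collectors"]["old"]
        print(node["name"],
              "heap_used_percent:", jvm["mem"]["heap_used_percent"],
              "old_gc_count:", old_gc["collection_count"],
              "old_gc_time_ms:", old_gc["collection_time_in_millis"])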

Elasticsearch version: 6.8.6
Total number of documents: 2,156,894
Number of clusters: 1
Number of nodes: 4
EC2 instance: 4 vCPU, 16 GB memory
Number of indices (excluding system indices): 4
JVM heap per node: 9 GB

Index settings:
Number of shards: 5 (default)
Number of replicas: 1 (default)
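
With the default 5 primary shards plus 1 replica per index, 4 indices comes to roughly 40 shard copies spread across the 4 nodes. If it helps to confirm the actual layout, here is a minimal sketch using the _cat/shards API, again assuming http://localhost:9200 with no authentication:

    import json
    import urllib.request
    from collections import Counter

    ES_HOST = "http://localhost:9200"  # assumption: adjust to your cluster endpoint

    # List every shard with its index, primary/replica flag, size, and node.
    url = ES_HOST + "/_cat/shards?h=index,shard,prirep,store,node&format=json"
    with urllib.request.urlopen(url) as resp:
        shards = json.loads(resp.read().decode("utf-8"))

    for s in shards:
        print(s["index"], s["shard"], s["prirep"], s["store"], s["node"])

    # Count how many shard copies each node holds.
    print(Counter(s["node"] for s in shards))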

    5/17/2020, 7:11:50 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376244] overhead, spent [314ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:11:50.773+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:11:51 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376245] overhead, spent [487ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:11:51.773+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:11:52 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376246] overhead, spent [316ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:11:52.774+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:11:53 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376247] overhead, spent [623ms] collecting in the last [1.1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:11:53.935+00:00","level":"WARN","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:11:54 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376248] overhead, spent [314ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:11:54.935+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:11:55 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376249] overhead, spent [321ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:11:55.935+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:11:56 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376250] overhead, spent [337ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:11:56.936+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    \u001B[0m","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][http_server_worker][T#1]","@timestamp":"2020-05-18T02:12:08.660+00:00","level":"INFO","logger_name":"tech.beshu.ror.accesscontrol.logging.AccessControlLoggingDecorator"}
    5/17/2020, 7:12:20 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376274] overhead, spent [323ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:12:20.941+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:12:21 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376275] overhead, spent [440ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:12:21.941+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:12:23 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376276] overhead, spent [552ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:12:23.020+00:00","level":"WARN","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:12:24 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376277] overhead, spent [328ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:12:24.020+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:12:25 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376278] overhead, spent [631ms] collecting in the last [1.3s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:12:25.328+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}
    5/17/2020, 7:12:26 PM	es-search[2304]	{"@version":1,"source_host":"prod2-ip-10-191-2-240.ec2.internal","message":"[gc][4376279] overhead, spent [319ms] collecting in the last [1s]","thread_name":"elasticsearch[prod2-ip-10-191-2-240.ec2.internal][scheduler][T#1]","@timestamp":"2020-05-18T02:12:26.329+00:00","level":"INFO","logger_name":"org.elasticsearch.monitor.jvm.JvmGcMonitorService"}

What load is the cluster under? How large are your documents and indices? What type of instances are you using?
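
If it helps to collect those figures, the cluster stats API reports index count, document count, store size, and overall JVM heap usage in a single call. A minimal sketch, assuming the cluster is reachable at http://localhost:9200 without authentication:

    import json
    import urllib.request

    ES_HOST = "http://localhost:9200"  # assumption: adjust to your cluster endpoint

    # Cluster-wide summary: index count, document count, store size, and heap usage.
    with urllib.request.urlopen(ES_HOST + "/_cluster/stats") as resp:
        stats = json.loads(resp.read().decode("utf-8"))

    indices = stats["indices"]
    heap = stats["nodes"]["jvm"]["mem"]
    print("indices:", indices["count"])
    print("documents:", indices["docs"]["count"])
    print("store size (bytes):", indices["store"]["size_in_bytes"])
    print("heap used/max (bytes):", heap["heap_used_in_bytes"], "/", heap["heap_max_in_bytes"])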

Thanks, Christian, for the reply.

Cluster load average per node: 2.14 max, 1.2 min
Ingest calls per day: around 3,000
Search queries per day: around 42K

Instance type: EC2 m5.xlarge

Total document count: 2,156,894
index1 -> Document Count: 129.3k, Data: 150.2 MB
index2 -> Document Count: 53.4k, Data: 11.5 MB
index3 -> Document Count: 598k, Data: 338.8 MB
index4 -> Document Count: 20.3k, Data: 18.7 MB
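
The four indices together hold only about 520 MB of data, so it may be worth checking what is actually occupying the 9 GB heap on each node (segment memory, fielddata, and caches, for example). A minimal sketch against the nodes stats API, under the same localhost/no-auth assumption as the earlier snippets:

    import json
    import urllib.request

    ES_HOST = "http://localhost:9200"  # assumption: adjust to your cluster endpoint

    # Break down heap-resident index structures per node.
    url = ES_HOST + "/_nodes/stats/indices/segments,fielddata,query_cache,request_cache"
    with urllib.request.urlopen(url) as resp:
        stats = json.loads(resp.read().decode("utf-8"))

    for node in stats["nodes"].values():
        idx = node["indices"]
        print(node["name"],
              "segments_mem:", idx["segments"]["memory_in_bytes"],
              "fielddata:", idx["fielddata"]["memory_size_in_bytes"],
              "query_cache:", idx["query_cache"]["memory_size_in_bytes"],
              "request_cache:", idx["request_cache"]["memory_size_in_bytes"])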
