Issue: Indexing time has drastically increased from 15 minutes to 8 hours for the same amount of data over the past few weeks. We haven't made any changes to any of the nodes in the cluster. GC is taking a lot of time:
[o.e.m.j.JvmGcMonitorService][gc][83699] overhead, spent [809ms] collecting in the last [1.2s]
Cluster details: 1 master and 2 client nodes (16 GB memory each), 6 data nodes (500 GB storage, 32 GB memory each). Memory-related settings on the nodes include bootstrap.memory_lock: true, MAX_OPEN_FILES=65536 and MAX_LOCKED_MEMORY=unlimited in /etc/sysconfig/elasticsearch, and half of the memory allocated to the Elasticsearch heap on all nodes.
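For reference, this is roughly what the memory-related configuration looks like on the data nodes (the 16 GB heap values below are simply half of the 32 GB RAM; treat the exact numbers as illustrative):

    # /etc/sysconfig/elasticsearch
    MAX_OPEN_FILES=65536
    MAX_LOCKED_MEMORY=unlimited

    # elasticsearch.yml
    bootstrap.memory_lock: true

    # jvm.options (half of the 32 GB RAM on the data nodes)
    -Xms16g
    -Xmx16g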
Has anyone run into this GC issue, or does anyone have suggestions on how we can resolve it?
Thanks in advance!
Can you please share the JVM options of the running node? The easiest way to obtain them is to run jps -l -m -v on the server; it will print the JVM options for all running Java processes that your user has permission to see.
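For example (filtering for the Elasticsearch process; the output will vary with your installation):

    jps -l -m -v | grep -i elasticsearch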
What is your indexing pattern? Are you using bulk requests? What is the size of the bulk payload? Do you use auto-generated IDs? Do you have any monitoring that shows heap usage over time?
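For context, in case it helps you describe your setup: a bulk request with auto-generated IDs simply omits _id from the action metadata, roughly like this (index name, fields and URL are placeholders; depending on your Elasticsearch version you may also need to supply a document type in the path or action line):

    cat > bulk.ndjson <<'EOF'
    {"index":{}}
    {"user":"alice","message":"first doc"}
    {"index":{}}
    {"user":"bob","message":"second doc"}
    EOF

    curl -s -H 'Content-Type: application/x-ndjson' \
         -XPOST 'http://localhost:9200/my-index/_bulk' \
         --data-binary @bulk.ndjson

Letting Elasticsearch generate the IDs is generally cheaper at index time than supplying your own, since it avoids the lookup for an existing document.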