I wanted to understand what the task below is that has been executing for the past 76 days. Is it expected to run for such a long time, or is there an issue with my Elasticsearch cluster?
Also, we frequently face long GC pauses, and I was wondering whether this could be one of the reasons for the issue.
In that case I would suspect #36770, which fixes a race condition that can lead to these long-running tasks. The fix is in versions ≥ 6.5.5.
Each such task only takes a small amount of heap, but it also prevents its parent task from completing. The parent task will be holding onto a whole bulk request, which is normally not that large (kilobytes rather than gigabytes). The race condition does seem more likely to trigger when the cluster is under load (heavy GC etc.), but it is not normally the cause of the GC pressure itself.
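For anyone wanting to check for the same symptom, here is a minimal sketch (not from this thread) that lists tasks which have been running for more than a day, along with their actions and parent task IDs, using the standard task management API. It assumes the cluster is reachable at http://localhost:9200 without authentication; adjust the URL and security settings for your own setup.

```python
# Sketch: find long-running tasks and their parents via GET _tasks?detailed=true.
# Assumption: unsecured cluster at http://localhost:9200.
import requests

ES_URL = "http://localhost:9200"

resp = requests.get(f"{ES_URL}/_tasks", params={"detailed": "true"})
resp.raise_for_status()

ONE_DAY_NANOS = 24 * 60 * 60 * 1_000_000_000

for node_id, node in resp.json()["nodes"].items():
    for task_id, task in node["tasks"].items():
        # running_time_in_nanos is reported per task by the _tasks API
        if task["running_time_in_nanos"] > ONE_DAY_NANOS:
            days = task["running_time_in_nanos"] / 1e9 / 86400
            print(f"{task_id}: action={task['action']} "
                  f"running={days:.1f} days "
                  f"parent={task.get('parent_task_id', '-')}")
```

If the stuck tasks all share a parent that is a bulk action, that matches the behaviour described above.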
Thanks for the response. Also, may I know what could be causing the frequent and long GC pauses? The long GC pauses make the Elasticsearch node unresponsive, which in turn makes my Logstash machine think the ES instance is dead.
We use Logstash as a consumer from a Kafka cluster. Because of the long GC pauses, Logstash stops taking any load, and the Kafka consumer falls behind (consumer lag builds up).
Kafka version: 0.10.1.1
Logstash version: 5.2.2
Elasticsearch version: 5.4.2
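To confirm whether GC is really what makes the node appear dead to Logstash, one option is to poll the nodes stats API and watch heap usage and cumulative old-generation GC time. Below is a minimal sketch (again assuming an unsecured cluster at http://localhost:9200); if old-gen GC time jumps around the moments Logstash marks the instance as dead, that supports the GC-pause theory.

```python
# Sketch: poll JVM heap and GC counters via GET _nodes/stats/jvm every 10 seconds.
# Assumption: unsecured cluster at http://localhost:9200.
import time
import requests

ES_URL = "http://localhost:9200"

while True:
    stats = requests.get(f"{ES_URL}/_nodes/stats/jvm").json()
    for node_id, node in stats["nodes"].items():
        jvm = node["jvm"]
        heap_pct = jvm["mem"]["heap_used_percent"]
        old_gc = jvm["gc"]["collectors"]["old"]
        print(f"{node['name']}: heap={heap_pct}% "
              f"old-gen GCs={old_gc['collection_count']} "
              f"old-gen GC time={old_gc['collection_time_in_millis']} ms")
    time.sleep(10)
```

The Elasticsearch GC logs (enabled by default) contain the same information per pause and are usually the first place to look.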