GC overhead while indexing data using Logstash

Hey Rajesh, sorry for the late reply, I've been off for a while.

The recommendations for JVM heap configuration in Elasticsearch generally assume the service is running exclusively on the node. You can find an in-depth discussion of the parameters and configuration at https://www.elastic.co/blog/a-heap-of-trouble. Be sure to cap the heap at around 31 GB so the JVM keeps using compressed object pointers; above roughly 32 GB the pointers become uncompressed and you lose a good chunk of the extra heap to pointer overhead.
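As a rough illustration (the 31g value is just an example, size it to about half the node's RAM and never above the compressed-oops threshold), a dedicated Elasticsearch node might have this in config/jvm.options:

```
# Keep min and max heap identical to avoid resize pauses,
# and below ~32 GB so compressed object pointers stay enabled
-Xms31g
-Xmx31g
```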

Regarding the Logstash performance issues, we have guidance on how to look into this further at https://www.elastic.co/guide/en/logstash/current/performance-troubleshooting.html and https://www.elastic.co/guide/en/logstash/master/tuning-logstash.html.
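The pipeline settings those guides discuss live in logstash.yml (the Logstash heap itself is set in Logstash's own jvm.options file). The values below are only example starting points to experiment with, not recommendations:

```yaml
# logstash.yml — illustrative starting points, tune against your own workload
pipeline.workers: 8        # defaults to the number of CPU cores on the host
pipeline.batch.size: 250   # events per worker per batch (default 125); bigger batches mean larger bulk requests
```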

What you describe sounds a bit like either Logstash or Elasticsearch running low on heap and spending most of its time in garbage collection.
There are some considerations around this as well in https://discuss.elastic.co/t/logstash-heap-size-vs-elasticsearch-heap-size/133662.
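One quick way to confirm which side is under heap pressure is to check the node stats. For Elasticsearch, something along these lines (assuming it is reachable on localhost:9200) shows heap usage and GC counters per node; Logstash exposes a similar monitoring API on port 9600 (GET _node/stats/jvm):

```
curl -s 'http://localhost:9200/_nodes/stats/jvm?pretty&filter_path=nodes.*.name,nodes.*.jvm.mem.heap_used_percent,nodes.*.jvm.gc'
```

If heap_used_percent sits consistently high and the old-generation collection times keep climbing, that is the process that needs more heap (or less load).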

Ideally you would separate the two services onto their own nodes and configure each one individually, following the best practices in the linked documentation.

All the best