Huge Time Delay between Logstash and Elasticsearch

My current pipeline is:

Rsyslog -> Kafka -> Logstash -> ES (5 nodes)
I see a huge time delay (around 10 hours) between the logs processed by Kafka and the logs arriving in my ES cluster.
The input and output plugins in Logstash look like this:

input {
  kafka {
    zk_connect       => 'host:port'
    topic_id         => 'abc'
    consumer_threads => 50
    codec            => json
  }
}

Kafka currently has 50 partitions.

output {
  elasticsearch {
    template           => "/export/logstash_new/elasticsearch-template.json"
    hosts              => ["host1", "host2", "host3", "host4", "host5"]
    template_overwrite => true
    manage_template    => true
    codec              => plain
  }
}

When I run the pipeline for a short duration (2 to 3 hours), there is no noticeable delay.
However, the longer the pipeline runs, the larger the delay becomes.

How can I figure out where the problem resides?
Is Logstash failing to process the data, or is there a problem with Elasticsearch indexing?

The current load is about 6k messages per minute (the load fluctuates).

Are you monitoring your Kafka topic lengths, as well as the rest of the pipeline (CPU etc.)?
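One way to narrow this down (a sketch; it assumes the plugin's default consumer group name "logstash", and the exact tool depends on your Kafka version, e.g. old 0.8 releases ship kafka.tools.ConsumerOffsetChecker instead):

# Consumer lag per partition: a large and growing "Lag" column means
# Logstash is not consuming as fast as Kafka receives.
bin/kafka-consumer-groups.sh --zookeeper host:port --describe --group logstash

# Bulk thread pool on the ES nodes: a full queue or a growing "rejected"
# count points at indexing pressure on the Elasticsearch side.
curl 'host1:9200/_cat/thread_pool?v&h=host,bulk.active,bulk.queue,bulk.rejected'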

Hi, the problem has been solved. The issue was with Elasticsearch indexing. There were 5 nodes in the cluster, of which 4 were data nodes and 1 was both a master and a data node. In the Logstash output plugin, I had listed all 5 nodes.

The issue was solved when I created a separate client node to handle the communication between Logstash and the cluster.
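With that in place, the Logstash output only needs to point at the client node. A minimal sketch, assuming the client node is reachable as "client-node" (a placeholder hostname):

output {
  elasticsearch {
    # All bulk requests go to the coordinating ("client") node,
    # which routes them on to the data nodes.
    hosts              => ["client-node"]
    template           => "/export/logstash_new/elasticsearch-template.json"
    template_overwrite => true
    manage_template    => true
  }
}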

So now my ES cluster looks like this (role settings are sketched after the list):

1 Client node (Role - Handling requests, load balancing)
1 Master node (Role - Cluster health management)
3 Data Nodes
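For reference, node roles in this generation of Elasticsearch (1.x/2.x) are controlled by two booleans in each node's elasticsearch.yml. A sketch of the three role combinations used above:

# Client (coordinating-only) node: accepts and load-balances requests,
# but holds no data and is not master-eligible.
node.master: false
node.data: false

# Dedicated master node: manages cluster state only.
node.master: true
node.data: false

# Data node: holds shards and does the indexing work.
node.master: false
node.data: true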

Running with a single master-eligible node creates a single point of failure. You should look to have 3 master-eligible nodes in the cluster (with minimum_master_nodes set to 2) in order to improve resiliency and availability.
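Assuming the nodes are configured via elasticsearch.yml, that looks like the following; the value should be a majority of the master-eligible nodes (for 3 master-eligible nodes, 3/2 + 1 = 2):

# Require agreement from 2 master-eligible nodes before electing a
# master, which protects against split-brain if one node is isolated.
discovery.zen.minimum_master_nodes: 2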

Okay... Thanks!