Filebeat performance stalls sometimes

# Issue started here
```
Sent: 2048/2048
Sent: 1806/1806
Sent: 242/242
Acked: 2048
```

Yes, this is why I suggested trying to increase `bulk_max_size`: it could be involved here, and it suspiciously defaults to 2048. Did you see any difference in these values after increasing `bulk_max_size`?
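For reference, a minimal sketch of where that setting lives in `filebeat.yml` (the host and the value 4096 are just example placeholders, not recommendations):

```yaml
output.logstash:
  hosts: ["localhost:5044"]   # example host, adjust to your setup
  # Defaults to 2048; try a larger batch to see if the stall pattern changes.
  bulk_max_size: 4096
```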

There is a possible bug in the Logstash output that can make a Beat continuously retry sending the same batch of events if any event in the batch is rejected. I created an issue for this some time ago, but we are not sure about the conditions under which it happens: https://github.com/elastic/beats/issues/11732. In any case, if this is the issue, we should see messages about failed events in the Filebeat or Logstash logs.
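To make any such failures more visible, one option (a sketch, assuming you are otherwise on default logging settings) is to enable debug logging scoped to the Logstash output in `filebeat.yml`:

```yaml
logging.level: debug
# Restrict debug output to the logstash output plugin to keep logs readable
logging.selectors: ["logstash"]
```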

Would it be an option for you to try sending events directly from Filebeat to Elasticsearch, without Logstash? That would help us confirm whether the issue is in Filebeat, in Logstash, or in the Logstash output.
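As a sketch, that would mean replacing the `output.logstash` section of `filebeat.yml` with an `output.elasticsearch` section along these lines (hosts and credentials below are placeholders):

```yaml
# Comment out the Logstash output while testing:
#output.logstash:
#  hosts: ["localhost:5044"]

output.elasticsearch:
  hosts: ["http://localhost:9200"]  # placeholder, point to your cluster
  # username: "elastic"             # only if security is enabled
  # password: "changeme"
```

Only one output can be enabled at a time, so the Logstash output has to be commented out for the test.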