Intermittent connectivity issues between Filebeat and Logstash

I have a few Filebeat containers and a single Logstash container running in a particular namespace of a Kubernetes cluster.

However, I see intermittent connectivity issues between Filebeat and Logstash (logs below).
Because of this, Filebeat repeatedly sends the same events to Logstash.

```
2020-01-23T15:43:14.925Z  ERROR  logstash/sync.go:155    Failed to publish events caused by: read tcp 100.96.10.3:59614->100.68.143.9:5044: i/o timeout
2020-01-23T15:43:16.813Z  ERROR  pipeline/output.go:121  Failed to publish events: read tcp 100.96.10.3:59614->100.68.143.9:5044: i/o timeout
2020-01-23T15:43:16.814Z  INFO   pipeline/output.go:95   Connecting to backoff(tcp://local-logstash:5044)
2020-01-23T15:43:16.818Z  INFO   pipeline/output.go:105  Connection to backoff(tcp://local-logstash:5044) established
```
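
For reference, Filebeat here is pointed at `tcp://local-logstash:5044`, as seen in the logs. A minimal sketch of the corresponding `filebeat.yml` output section looks like this; the host and port come from the logs above, while the timeout and backoff values are just the documented defaults rather than my exact config:

```yaml
# filebeat.yml -- output section only (sketch, not the exact config in use)
output.logstash:
  hosts: ["local-logstash:5044"]  # Logstash service name and beats port from the logs above
  timeout: 30                     # seconds to wait for responses from Logstash (default 30)
  bulk_max_size: 2048             # maximum number of events per batch (default)
  backoff.init: 1s                # initial wait before retrying after an error (default)
  backoff.max: 60s                # upper bound on the retry backoff (default)
```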

I have the same issue. I'm running both Logstash and Filebeat 7.5.2 on Docker. Both containers are in the same Docker network on the same host. Moving Filebeat out of Docker onto the host results in the same issue, and recreating the containers and networks or restarting the Docker daemon does not solve the problem either.
Filebeat manages to write events to Logstash, but reconnects every 60s due to this i/o timeout error.
As far as I can tell, there is no networking issue between these containers: ping, telnet and curl all work flawlessly.

Edit: Using the Filebeat and Logstash 7.5.1 images in the very same setup does not show this issue.
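
For what it's worth, the 60s reconnect interval coincides with the default `client_inactivity_timeout` (60 seconds) of the Logstash beats input, which closes connections it considers idle. I don't know whether that is actually what's happening here, but a minimal sketch of the beats input with that setting made explicit, purely as an illustration (only port 5044 is taken from the logs; the 300 is an arbitrary value), would be:

```
# Logstash pipeline config (sketch; only port 5044 is taken from this thread)
input {
  beats {
    port => 5044
    # Defaults to 60 seconds; connections Logstash considers idle for longer
    # than this are closed. Raising it is shown here only as an illustration,
    # not as a confirmed fix.
    client_inactivity_timeout => 300
  }
}
```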

Btw, I'm also able to telnet into the Logstash port from the Filebeat container.
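
In case it helps anyone reproduce the check, this is roughly how I'm probing reachability from inside the Filebeat container; the namespace and pod name are placeholders, and it assumes telnet is available in the image:

```sh
# Kubernetes: run the check from a Filebeat pod against the Logstash service port.
# "my-namespace" and "filebeat-xxxxx" are placeholders for the real names.
kubectl -n my-namespace exec -it filebeat-xxxxx -- telnet local-logstash 5044
```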
