Filebeat: Failed to publish events caused by: client is not connected

Filebeat throws the following error message:
Failed to publish events caused by: read tcp 127.0.0.1:53380->127.0.0.1:5044: i/o timeout
2020-02-03T15:45:46.987+0530 ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-03T15:45:48.415+0530 ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-03T15:45:48.416+0530 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
2020-02-03T15:45:48.418+0530 INFO pipeline/output.go:105 Connection to backoff(async(tcp://localhost:5044)) established

This happens every 30 seconds, and Filebeat publishes the same log events to Elasticsearch repeatedly. I have also set client_inactivity_timeout in Logstash, but it made no difference.
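For what it's worth, the 30-second cycle matches the default `timeout` of Filebeat's Logstash output. One workaround sometimes suggested is raising that timeout and setting a connection `ttl` in filebeat.yml — a minimal sketch, assuming the localhost:5044 output from the logs above; the values are examples, not verified fixes:

```yaml
# filebeat.yml — Logstash output (host/port taken from the logs above)
output.logstash:
  hosts: ["localhost:5044"]
  timeout: 90        # default is 30s, which matches the observed cycle (example value)
  pipelining: 0      # ttl is not honored by the async (pipelined) client
  ttl: 120s          # periodically re-establish the connection (example value)
```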

This causes the same records to be indexed in Elasticsearch multiple times.

Please let me know how to resolve this issue.

Logstash version: 7.5.2
Filebeat version: 7.5.2

Hi,
I had the exact same problem.
Filebeat was sending the events and displaying this error, even if the events were recorded into Logstash and were showing up on Kibana. The same event was showing up multiple times because Filebeat kept trying to send it.
Anyways, the solution was rolling back the entire stack to 7.5.1.
Solved everything.
I suggest you try that.

Hi,
This was happening when Filebeat was writing directly to Logstash. I introduced Kafka between Filebeat and Logstash, and that setup seems to work fine. The direct Filebeat-to-Logstash path still needs looking into.
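For anyone wanting to try the same workaround: Filebeat can write to Kafka natively. A minimal sketch of the Filebeat side — the broker address and topic name are placeholders:

```yaml
# filebeat.yml — send events to Kafka instead of Logstash
# (broker address and topic name are placeholders)
output.kafka:
  hosts: ["localhost:9092"]
  topic: "filebeat-logs"
  required_acks: 1   # wait for the Kafka leader to acknowledge each write
```

On the Logstash side, a `kafka` input with matching `bootstrap_servers` and `topics` reads the events back out. Kafka then absorbs Filebeat's retries and reconnects instead of Logstash.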

I have the same problem but while running:

  • Logstash 7.5.1
  • Filebeat 7.5.1

I run everything through Logstash here, and nothing reaches Logstash at the moment. Only when I restart Filebeat does it send the logs that were stuck since the last restart. New logs are not moving.

I use centralized management of Filebeat. If I go back to the original (non-centralized) way, it all works fine.

/Tim

I get the same error.
Filebeat, Logstash, Elasticsearch: version 7.5.2

Filebeat to Elasticsearch: works fine,
but Filebeat to Logstash is giving trouble.

2020-02-10T10:32:20.109-0500 ERROR logstash/async.go:256 Failed to publish events caused by: read tcp 127.0.0.1:58722->127.0.0.1:5044: i/o timeout
2020-02-10T10:32:20.113-0500 ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-10T10:32:21.623-0500 ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-10T10:32:21.623-0500 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
2020-02-10T10:32:21.626-0500 INFO pipeline/output.go:105 Connection to backoff(async(tcp://localhost:5044)) established

Filebeat keeps sending the same events to Logstash again and again.

Please suggest what I am missing.

If you had something that worked in version 7.5.1, I suggest you roll back to that version.
I go straight from Filebeat to Logstash; everything runs on Docker, on the same bridge network.

I don't know if you are using Docker. If not, then that might not be your problem, but you could always give it a try.

Be sure to clean out any residual configuration or data files. You want to start again in 7.5.1 with a clean slate.

Something that worked for someone else was putting a buffer like Kafka between Filebeat and Logstash:
Filebeat --> Kafka --> Logstash

Hope this helps

Thanks for your quick reply. I am not using Docker. I just installed the latest stack on a Windows 10 machine. I have been trying to find a solution for a few days, but have not succeeded yet.

Okay, the issue is solved. I just added the client_inactivity_timeout parameter to my Logstash config file.
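For reference, `client_inactivity_timeout` is an option of the Beats input plugin, so it goes in the `input` block of the pipeline config rather than logstash.yml. A sketch of what that looks like — the port matches the logs in this thread, and the timeout value is an example:

```conf
# Logstash pipeline config — Beats input
input {
  beats {
    port => 5044
    # Default is 60 seconds; raise it so idle Filebeat connections
    # are not closed between batches (value is an example).
    client_inactivity_timeout => 300
  }
}
```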

Setting client_inactivity_timeout in the Logstash config did not help in my case. I'm using Filebeat 7.6.0.
The combination of Filebeat 7.5.1 and Logstash 7.6.0 did not help either.
With both on 7.5.1, everything works correctly.
I'd say Logstash is to blame.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.