Filebeat throws the following error message:
Failed to publish events caused by: read tcp 127.0.0.1:53380->127.0.0.1:5044: i/o timeout
2020-02-03T15:45:46.987+0530 ERROR logstash/async.go:256 Failed to publish events caused by: client is not connected
2020-02-03T15:45:48.415+0530 ERROR pipeline/output.go:121 Failed to publish events: client is not connected
2020-02-03T15:45:48.416+0530 INFO pipeline/output.go:95 Connecting to backoff(async(tcp://localhost:5044))
2020-02-03T15:45:48.418+0530 INFO pipeline/output.go:105 Connection to backoff(async(tcp://localhost:5044)) established
This happens every 30 seconds, and Filebeat publishes the same log events to Elasticsearch repeatedly. I have set client_inactivity_timeout in Logstash as well, but it made no difference.
As a result, the same records are being indexed in Elasticsearch multiple times.
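For reference, here is roughly how I set client_inactivity_timeout, in case I did it wrong. The Filebeat timeout line below is only a guess on my part: the 30-second interval in the errors matches the default network timeout of Filebeat's Logstash output, so raising it might be worth a try, but I have not confirmed it helps.

    # Logstash pipeline (beats input); client_inactivity_timeout is in seconds
    input {
      beats {
        port => 5044
        client_inactivity_timeout => 300   # raised from the default of 60
      }
    }

    # filebeat.yml
    output.logstash:
      hosts: ["localhost:5044"]
      timeout: 120   # response timeout; defaults to 30s, matching the error interval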
Hi,
I had the exact same problem.
Filebeat was sending the events and displaying this error even though the events were reaching Logstash and showing up in Kibana. The same event showed up multiple times because Filebeat kept retrying it.
Anyways, the solution was rolling back the entire stack to 7.5.1.
Solved everything.
I suggest you try that.
Hi,
This was happening when Filebeat wrote directly to Logstash. I introduced Kafka between Filebeat and Logstash, and that setup seems to work fine. The direct Filebeat-to-Logstash path would still need looking into.
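For anyone who wants to try the same workaround, the Filebeat side looks roughly like this; the broker address and topic name are placeholders for whatever your Kafka setup uses:

    # filebeat.yml -- publish to Kafka instead of Logstash
    output.kafka:
      hosts: ["localhost:9092"]   # placeholder broker address
      topic: "filebeat"           # placeholder topic name
      required_acks: 1            # wait for the leader broker to ack each batch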
I run everything through Logstash here, and nothing is reaching Logstash at the moment. Only when I restart Filebeat does it send the logs that were stuck since the last restart. New logs are not moving at all.
I use centralized management of Filebeat. If I go back to the original (non-centralized) way, it all works fine.
If you had something that worked in version 7.5.1, I suggest you roll back to that version.
I go straight from Filebeat to Logstash; everything runs on Docker, and they are on the same bridge network.
I don't know if you are using Docker. If not, then that might not be your problem, but you could always give it a try.
Be sure to clean up any residual configuration or data files. You want to start again on 7.5.1 with a clean slate.
Something that worked for someone else was putting a buffer like Kafka between Filebeat and Logstash:
Filebeat --> Kafka --> Logstash
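On the Logstash side that would look something like this sketch; the broker address and topic are placeholders and must match whatever Filebeat writes to:

    # Logstash pipeline -- consume from Kafka instead of a beats input
    input {
      kafka {
        bootstrap_servers => "localhost:9092"   # placeholder broker address
        topics => ["filebeat"]                  # must match the Filebeat topic
        codec => "json"                         # Filebeat's Kafka output sends JSON by default
      }
    }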
Thanks for your quick reply. I am not using Docker; I just installed the latest stack on a Windows 10 machine. I have been trying to find a solution for a few days now but have not succeeded yet.
Setting client_inactivity_timeout in the Logstash config did not help in my case. I'm using Filebeat 7.6.0.
A combination of Filebeat 7.5.1 and Logstash 7.6.0 did not help either.
With both on 7.5.1, everything works correctly.
I'd say Logstash is to blame.