Filebeat works but keeps generating ERROR messages in its logs

Hi there,

I am running Filebeat (7.3.1) on multiple systems, and all of them are sending their data to Logstash (and on to Elasticsearch) as desired. I can see the events in Kibana perfectly.
But I realized that every Filebeat instance keeps writing error messages like these to its own logfile:

2020-02-28T10:13:55.354+0100 ERROR logstash/async.go:256 Failed to publish events caused by: write tcp 10.193.196.86:45350->10.193.196.93:5044: write: connection reset by peer
2020-02-28T10:13:57.008+0100 ERROR pipeline/output.go:121 Failed to publish events: write tcp 10.193.196.86:45350->10.193.196.93:5044: write: connection reset by peer

This happens every time they try to send an event to Logstash (which, as I said, actually works).

I am running two Elasticsearch/Logstash nodes and one separate Kibana node (all on 7.3.1).

My Logstash input config looks like this:

input {
  beats {
    port => 5044
    client_inactivity_timeout => 3000
  }
}

output {
  if ![type] {
    pipeline { send_to => "default" }
  }
  else if ( [type] == 'testpipe' ) {
    pipeline { send_to => 'testpipe' }
  }
  ......

The output part continues with multiple pipeline redirections depending on the custom 'type' field.
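
For context, those pipeline { send_to => ... } outputs feed downstream pipelines registered in pipelines.yml. The sketch below shows roughly how that registration would look; the pipeline IDs match the conditionals above, but the file paths are just examples, not my exact setup:

- pipeline.id: distributor
  path.config: "/etc/logstash/conf.d/distributor.conf"   # the input/output config shown above
- pipeline.id: default
  path.config: "/etc/logstash/conf.d/default.conf"        # starts with: input { pipeline { address => "default" } }
- pipeline.id: testpipe
  path.config: "/etc/logstash/conf.d/testpipe.conf"       # starts with: input { pipeline { address => "testpipe" } }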

My Filebeat output:

output.logstash:
  hosts: ["lSnode1:5044", "lSnode2:5044"]
  loadbalance: true
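
For completeness, the custom 'type' field that the Logstash conditionals check is set on the Filebeat side. A minimal sketch of what that looks like in filebeat.yml (the log path and field value here are examples, not my real config):

filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/testapp/*.log   # example path
    fields:
      type: testpipe             # the custom field the Logstash conditionals check
    fields_under_root: true      # so it arrives as [type] instead of [fields][type]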

What I have tried so far:

  • update the logstash-input-beats plugin to the newest version (commands sketched below)
  • add 'client_inactivity_timeout => 3000' to my LS pipe

...but neither seems to work for me.
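
For reference, the plugin update from the first bullet was done with the standard logstash-plugin tool, roughly like this (the install path depends on how Logstash was installed; this is the default for the deb/rpm packages):

# check the currently installed version of the beats input plugin
/usr/share/logstash/bin/logstash-plugin list --verbose logstash-input-beats

# update it to the newest version, then restart Logstash
/usr/share/logstash/bin/logstash-plugin update logstash-input-beats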

I am open to any ideas.
If you need any more information, just ask.

Are the TCP flows fully open between these two machines: 10.193.196.86:45350->10.193.196.93:5044?

Yes.
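
For anyone checking the same thing, a quick way to verify from the Filebeat host (not necessarily how it was verified here):

# confirm an established connection to the beats port
ss -tnp | grep ':5044'

# or test that the port is reachable at all
nc -zv 10.193.196.93 5044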
