Dear community,
in my Logstash 7.8.1 logs I see this error message quite often:
[2021-09-07T08:36:56,031][ERROR][logstash.inputs.tcp ][main] Error in Netty pipeline: java.io.IOException: Connection reset by peer
It only occurs when data is sent from Google/Kubernetes.
In these cases we also see data loss.
Do you know a possible cause and/or a workaround for this?
Our input plugin part looks like this:
input {
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "logstash"
    codec => json
    threads => 8
    add_field => { "[@metadata][datatype]" => "log" }
  }
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "filebeat"
    codec => json
    threads => 6
    add_field => { "[@metadata][datatype]" => "log" }
  }
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "winlogbeat"
    codec => json
    threads => 1
    add_field => { "[@metadata][datatype]" => "log" }
  }
  redis {
    host => "127.0.0.1"
    data_type => "list"
    key => "beat"
    codec => json
    threads => 4
    add_field => { "[@metadata][datatype]" => "beat" }
  }
  beats {
    id => "beats-input"
    port => 5044
    add_field => { "[@metadata][indexprefix]" => "%{[@metadata][beat]}-%{[@metadata][version]}" }
  }
  tcp {
    port => 5518
    codec => "fluent"
    tcp_keep_alive => true
    tags => ['basic','docker','json']
  }
}
It could be some kind of timeout or a too-much-data problem.
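For context on what the message itself means: "Connection reset by peer" is raised when the remote side aborts the connection with a TCP RST instead of closing it cleanly, so any sender (or intermediate load balancer) tearing connections down abruptly would produce it. Here is a minimal sketch, with no Logstash involved, that reproduces the same condition using plain Python sockets against a stand-in server (all names and ports here are made up for the demo):

```python
import socket
import struct
import threading

# Minimal TCP server standing in for the Logstash tcp input.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))  # ephemeral port, just for the demo
server.listen(1)
port = server.getsockname()[1]

result = {}

def serve():
    conn, _ = server.accept()
    try:
        while conn.recv(4096):
            pass  # drain data until EOF
        result["error"] = None  # clean FIN close: no error
    except ConnectionResetError:
        # This is the condition Netty surfaces as
        # "java.io.IOException: Connection reset by peer".
        result["error"] = "Connection reset by peer"
    finally:
        conn.close()

t = threading.Thread(target=serve)
t.start()

# The client sends one event and then aborts the connection:
# SO_LINGER with a zero timeout makes close() emit a TCP RST
# instead of the normal FIN handshake.
client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(("127.0.0.1", port))
client.sendall(b'{"message": "hello"}\n')
client.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER,
                  struct.pack("ii", 1, 0))
client.close()

t.join()
server.close()
print(result["error"])  # the same error text Logstash logs
```

The RST also discards any data still in flight on that connection, which would line up with the data loss we are seeing.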
Thanks for any insight you can give on this!
Cheers