Lumberjack output plugin posts duplicate entries when there is a connection error


(Manoj kumar K K) #1

Duplicate entries are posted when the following error happens.


```
Client write error, trying connect {:e=>#<IOError: Connection reset by peer>, :backtrace=>[
"org/jruby/ext/openssl/SSLSocket.java:950:in `syswrite'",
"/root/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/jls-lumberjack-0.0.26/lib/lumberjack/client.rb:107:in `send_window_size'",
"/root/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/jls-lumberjack-0.0.26/lib/lumberjack/client.rb:127:in `write_sync'",
"/root/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/jls-lumberjack-0.0.26/lib/lumberjack/client.rb:42:in `write'",
"/root/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-lumberjack-3.1.7/lib/logstash/outputs/lumberjack.rb:65:in `flush'",
"/root/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/buffer.rb:219:in `block in buffer_flush'",
"org/jruby/RubyHash.java:1343:in `each'",
"/root/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/buffer.rb:216:in `buffer_flush'",
"/root/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/buffer.rb:159:in `buffer_receive'",
"/root/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-lumberjack-3.1.7/lib/logstash/outputs/lumberjack.rb:52:in `block in register'",
"/root/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-codec-json-3.0.5/lib/logstash/codecs/json.rb:42:in `encode'",
"/root/logstash-6.4.0/vendor/bundle/jruby/2.3.0/gems/logstash-output-lumberjack-3.1.7/lib/logstash/outputs/lumberjack.rb:59:in `receive'",
"/root/logstash-6.4.0/logstash-core/lib/logstash/outputs/base.rb:89:in `block in multi_receive'",
"org/jruby/RubyArray.java:1734:in `each'",
"/root/logstash-6.4.0/logstash-core/lib/logstash/outputs/base.rb:89:in `multi_receive'",
"org/logstash/config/ir/compiler/OutputStrategyExt.java:114:in `multi_receive'",
"org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:97:in `multi_receive'",
"/root/logstash-6.4.0/logstash-core/lib/logstash/pipeline.rb:372:in `block in output_batch'",
"org/jruby/RubyHash.java:1343:in `each'",
"/root/logstash-6.4.0/logstash-core/lib/logstash/pipeline.rb:371:in `output_batch'",
"/root/logstash-6.4.0/logstash-core/lib/logstash/pipeline.rb:323:in `worker_loop'",
"/root/logstash-6.4.0/logstash-core/lib/logstash/pipeline.rb:285:in `block in start_workers'"]}
```


Any idea why this error happens? I am using Logstash 6.3.2 on both the server and client side.

My Logstash config file looks like this:

```
# https://www.elastic.co/guide/en/logstash/current/ls-to-ls.html

input {
  file {
    path => "/home/abcdef/kibana_in/in.json"
    start_position => "beginning"
    sincedb_path => "/home/abcdef/sincedb_path.txt"
    codec => "json"
  }
}

filter {
  split { }
}

output {
  lumberjack {
    codec => json
    hosts => "abcd.abcd.com"
    ssl_certificate => "/home/abcdef/lumberjack.cert"
    port => 31333
  }
  file {
    path => "/home/abcdef/file.out.txt"
  }
}
```

Please help!


(Magnus Bäck) #2

If the connection breaks after the lumberjack output has sent the data but before it has received the acknowledgement from the peer, that data payload will be sent again. There's nothing to be done about that.
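The resend behaviour described above is ordinary at-least-once delivery: the sender retries until it sees an acknowledgement, so a lost ACK means the peer receives the payload twice. A minimal illustrative sketch (not the actual lumberjack client; all names here are made up for the example):

```python
# Sketch of at-least-once delivery: a lost ACK causes a duplicate.
def send_with_retry(batch, transport, max_attempts=3):
    """Resend the batch until the peer acknowledges it."""
    for attempt in range(max_attempts):
        transport.write(batch)          # the payload may reach the peer...
        if transport.wait_for_ack():    # ...even when this ACK is lost
            return attempt + 1
    raise IOError("no acknowledgement after %d attempts" % max_attempts)

class FlakyTransport:
    """Delivers every write, but 'loses' the first ACK."""
    def __init__(self):
        self.delivered = []
        self.acks_seen = 0

    def write(self, batch):
        self.delivered.append(batch)    # peer receives the data

    def wait_for_ack(self):
        self.acks_seen += 1
        return self.acks_seen > 1       # first ACK never arrives

t = FlakyTransport()
send_with_retry(["event-1"], t)
print(t.delivered)                      # the same batch arrives twice
```

The sender cannot distinguish "data lost" from "ACK lost", so resending is the only safe choice, at the cost of duplicates.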


(Manoj kumar K K) #3

Hi,

Any idea why this connection error happens so frequently? And is there any way to avoid duplicate entries if the error itself is unavoidable?


(Christian Dahlqvist) #4

Most components in the Elastic Stack offer an at-least-once delivery guarantee in the face of network errors like this, so if issues in your network cannot be avoided, duplicates are difficult to avoid within the pipeline itself. Deduplication is therefore usually handled at the destination system instead. When data is sent to Elasticsearch, for example, you can specify an external document ID, as outlined in this blog post. Elasticsearch then indexes the first document and applies any duplicates that carry the same document ID as updates, replacing the first document rather than adding them as separate documents.
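As an illustrative sketch of that approach (the field choice, key, and Elasticsearch host are assumptions, not taken from this thread), a fingerprint filter can derive a deterministic document ID from the event content, so a resent duplicate overwrites the original instead of creating a second document:

```
filter {
  fingerprint {
    # Hash the event content into a stable ID; stored in @metadata
    # so it is not indexed as a field itself.
    source => "message"
    target => "[@metadata][fingerprint]"
    method => "SHA256"
    key => "any-static-key"
  }
}

output {
  elasticsearch {
    hosts => ["localhost:9200"]
    # Identical events produce identical IDs, so duplicates become updates.
    document_id => "%{[@metadata][fingerprint]}"
  }
}
```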


(Manoj kumar K K) #5

Thanks a lot


(system) #6

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.