Logstash exits with unexpected error

I use versions 5.6.2 and 6.2.3 in my production environment. My Logstash exits frequently because of the following error:

[2019-03-05T08:49:19,668][FATAL][logstash.runner ] An unexpected error occurred! {:error=>#<Errno::EMSGSIZE: Message too long - No message available>, :backtrace=>["org/jruby/ext/socket/RubyUDPSocket.java:438:in `send'", "XXXX/logstash/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-output-udp-3.0.5/lib/logstash/outputs/udp.rb:24:in `block in register'", "XXXX/logstash/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-codec-json-3.0.5/lib/logstash/codecs/json.rb:42:in `encode'", "XXXX/logstash/logstash-6.2.3/vendor/bundle/jruby/2.3.0/gems/logstash-output-udp-3.0.5/lib/logstash/outputs/udp.rb:31:in `receive'", "XXXX/logstash/logstash-6.2.3/logstash-core/lib/logstash/outputs/base.rb:92:in `block in multi_receive'", "org/jruby/RubyArray.java:1734:in `each'", "XXXX/logstash/logstash-6.2.3/logstash-core/lib/logstash/outputs/base.rb:92:in `multi_receive'", "XXXX/logstash/logstash-6.2.3/logstash-core/lib/logstash/output_delegator_strategies/legacy.rb:22:in `multi_receive'", "XXXX/logstash/logstash-6.2.3/logstash-core/lib/logstash/output_delegator.rb:49:in `multi_receive'", "XXXX/logstash/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:479:in `block in output_batch'", "org/jruby/RubyHash.java:1343:in `each'", "XXXX/logstash/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:478:in `output_batch'", "XXXX/logstash/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:430:in `worker_loop'", "XXXX/logstash/logstash-6.2.3/logstash-core/lib/logstash/pipeline.rb:385:in `block in start_workers'"]}
[2019-03-05T08:49:19,978][ERROR][org.logstash.Logstash ] java.lang.IllegalStateException: org.jruby.exceptions.RaiseException: (SystemExit) exit

Here is my Logstash config:

input {
  beats {
    port => 5150
  }
}

filter {
  truncate {
    fields => "message"
    length_bytes => 100000
  }
  if [type] =~ "XXXX-log" {
    mutate {
      rename => {
        "[beat][hostname]" => "server_name"
      }
    }
    grok {
      match => ["message", ".* \w+=(?<request_id>\S+), \w+=(?\S+), \w+=(?\S+), \w+=(?\S+), \w+=(?\S+), \w+=(?\S+), \w+=(?\S+), \w+=(?.)"]
      remove_field => ["beat", "tags", "message", "source", "offset", "prospector"]
    }
  } else if [type] =~ "nginx-log" {
    mutate {
      rename => {
        "[beat][hostname]" => "server_name"
      }
    }
    grok {
      match => ["message", "(?\S+) - (?\S+) [(?\S+)] "(?\S+) (?\S+) (?\S+)" (?\d+) (?\d+) "(?.)" "(?\S+)""]
      remove_field => ["offset", "prospector", "beat", "source", "tags", "message"]
    }
  } else {
    drop {}
  }
  if [url] == "/" {
    drop {}
  }
  ruby {
    code => "event.timestamp.time.localtime"
  }
}

output {
  elasticsearch {
    hosts => ["XXX1"]
    index => "XXXX-%{+YYYY.MM.dd}"
  }
  udp {
    host => 'XXX2'
    port => 514
  }
}

Can somebody save me?

Your events can be very large (your truncate filter allows messages of up to 100,000 bytes), while a single UDP datagram can carry at most roughly 64 KB of payload (about 65,507 bytes). When an event exceeds that limit, the udp output raises Errno::EMSGSIZE and Logstash exits, which matches your backtrace. I would recommend switching from the udp output plugin to tcp, or placing a conditional around the udp output so that only events below the datagram limit are sent that way.
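Here is a rough, untested sketch of the conditional approach. The 60,000-byte threshold, the "oversized" tag, and reusing your XXX2:514 endpoint for a tcp output are assumptions you would adapt to your setup; the ruby filter would go at the end of your existing filter block so it sees the event as it will be sent:

filter {
  # Assumed threshold (below the ~65,507-byte UDP payload limit):
  # tag events whose JSON-encoded form would not fit in one datagram.
  ruby {
    code => "event.tag('oversized') if event.to_json.bytesize > 60000"
  }
}

output {
  if "oversized" not in [tags] {
    udp {
      host => 'XXX2'
      port => 514
    }
  } else {
    # Hypothetical fallback: send large events over TCP instead,
    # since a TCP stream has no per-datagram size limit.
    tcp {
      host => 'XXX2'
      port => 514
    }
  }
}

If you do not need the UDP transport at all, the simpler option is to drop the conditional and replace the udp block with the tcp block directly.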
