Logstash service crash 5.6.1

Hi,
I'm running CentOS 7 with ELK stack 5.6.1.
The Logstash service crashes after a restart. I've noticed that of my Logstash inputs, only the tcp input does not work:

input {
    tcp {
        port => "5759"
        type => "iis"
        codec => json {
            charset => "ASCII"
        }
    }
}

The udp input, however, is working fine:

udp {
    port => "5757"
    type => "tnapplogs"
    codec => plain {
        charset => "ASCII"
    }
}

The whole ELK stack worked for two months without problems, but yesterday Logstash crashed.

This is the output after the Logstash service was restarted:

● logstash.service - logstash
   Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
   Active: active (running) since Fri 2018-04-06 17:54:09 EEST; 9min ago
 Main PID: 4233 (java)
   CGroup: /system.slice/logstash.service
           └─4233 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx2g -Xms2g -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash

Apr 06 17:56:20 amslogelk1.tradenetworks.ams logstash[4233]: at LogStash::Util::WrappedSynchronousQueue::ReadBatch.each(/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:228)
Apr 06 17:56:20 amslogelk1.tradenetworks.ams logstash[4233]: at LogStash::Util::WrappedSynchronousQueue::ReadBatch.each(/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:228)
Apr 06 17:56:20 amslogelk1.tradenetworks.ams logstash[4233]: at org.jruby.RubyHash.each(org/jruby/RubyHash.java:1342)
Apr 06 17:56:20 amslogelk1.tradenetworks.ams logstash[4233]: at LogStash::Util::WrappedSynchronousQueue::ReadBatch.each(/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:227)
Apr 06 17:56:20 amslogelk1.tradenetworks.ams logstash[4233]: at LogStash::Util::WrappedSynchronousQueue::ReadBatch.each(/usr/share/logstash/logstash-core/lib/logstash/util/wrapped_synchronous_queue.rb:227)
Apr 06 17:56:20 amslogelk1.tradenetworks.ams logstash[4233]: at LogStash::Pipeline.filter_batch(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:398)
Apr 06 17:56:20 amslogelk1.tradenetworks.ams logstash[4233]: at LogStash::Pipeline.filter_batch(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:398)
Apr 06 17:56:20 amslogelk1.tradenetworks.ams logstash[4233]: at RUBY.worker_loop(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:379)
Apr 06 17:56:20 amslogelk1.tradenetworks.ams logstash[4233]: at RUBY.start_workers(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:342)
Apr 06 17:56:20 amslogelk1.tradenetworks.ams logstash[4233]: at java.lang.Thread.run(java/lang/Thread.java:748)

Could you please help me?

Can you post the full logs from startup until the exceptions happened? I think systemd is hiding part of the stack trace there.

Hi,
I updated Logstash to 5.6.5, but the problem still persists.
This is all I see when I restart the Logstash service. For the first few minutes the service stays up:

systemctl status logstash -l
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2018-04-07 01:28:32 EEST; 28s ago
Main PID: 9894 (java)
CGroup: /system.slice/logstash.service
└─9894 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash

Apr 07 01:28:32 amslogelk1.tradenetworks.ams systemd[1]: Started logstash.
Apr 07 01:28:32 amslogelk1.tradenetworks.ams systemd[1]: Starting logstash...
Apr 07 01:28:44 amslogelk1.tradenetworks.ams logstash[9894]: Sending Logstash's logs to /var/log/logstash which is now configured via log4j2.properties

After a few minutes the service crashed:

systemctl status logstash -l
● logstash.service - logstash
Loaded: loaded (/etc/systemd/system/logstash.service; enabled; vendor preset: disabled)
Active: active (running) since Sat 2018-04-07 01:12:50 EEST; 10min ago
Main PID: 7859 (java)
CGroup: /system.slice/logstash.service
└─7859 /usr/bin/java -XX:+UseParNewGC -XX:+UseConcMarkSweepGC -XX:CMSInitiatingOccupancyFraction=75 -XX:+UseCMSInitiatingOccupancyOnly -XX:+DisableExplicitGC -Djava.awt.headless=true -Dfile.encoding=UTF-8 -XX:+HeapDumpOnOutOfMemoryError -Xmx1g -Xms256m -Xss2048k -Djffi.boot.library.path=/usr/share/logstash/vendor/jruby/lib/jni -Xbootclasspath/a:/usr/share/logstash/vendor/jruby/lib/jruby.jar -classpath : -Djruby.home=/usr/share/logstash/vendor/jruby -Djruby.lib=/usr/share/logstash/vendor/jruby/lib -Djruby.script=jruby -Djruby.shell=/bin/sh org.jruby.Main /usr/share/logstash/lib/bootstrap/environment.rb logstash/runner.rb --path.settings /etc/logstash

Apr 07 01:19:24 amslogelk1.tradenetworks.ams logstash[7859]: at RUBY.initialize((eval):1539)
Apr 07 01:19:24 amslogelk1.tradenetworks.ams logstash[7859]: at org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)
Apr 07 01:19:24 amslogelk1.tradenetworks.ams logstash[7859]: at RUBY.initialize((eval):1532)
Apr 07 01:19:24 amslogelk1.tradenetworks.ams logstash[7859]: at org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)
Apr 07 01:19:24 amslogelk1.tradenetworks.ams logstash[7859]: at RUBY.filter_func((eval):688)
Apr 07 01:19:24 amslogelk1.tradenetworks.ams logstash[7859]: at LogStash::Pipeline.filter_batch(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:398)
Apr 07 01:19:24 amslogelk1.tradenetworks.ams logstash[7859]: at LogStash::Pipeline.filter_batch(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:398)
Apr 07 01:19:24 amslogelk1.tradenetworks.ams logstash[7859]: at RUBY.worker_loop(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:379)
Apr 07 01:19:24 amslogelk1.tradenetworks.ams logstash[7859]: at RUBY.start_workers(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:342)
Apr 07 01:19:24 amslogelk1.tradenetworks.ams logstash[7859]: at java.lang.Thread.run(java/lang/Thread.java:748)

I found this error message in the Logstash log from when the service crashed:

[ERROR][logstash.pipeline ] Exception in pipelineworker, the pipeline stopped processing new events, please check your filter configuration and restart Logstash. {"exception"=>"Expected input field value to be String or List type", "backtrace"=>["org.logstash.filters.GeoIPFilter.handleEvent(org/logstash/filters/GeoIPFilter.java:115)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:498)", "LogStash::Filters::GeoIP.filter(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.3.1-java/lib/logstash/filters/geoip.rb:125)", "LogStash::Filters::GeoIP.filter(/usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-geoip-4.3.1-java/lib/logstash/filters/geoip.rb:125)", "LogStash::Filters::Base.do_filter(/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:145)", "LogStash::Filters::Base.do_filter(/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:145)", "LogStash::Filters::Base.multi_filter(/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:164)", "LogStash::Filters::Base.multi_filter(/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:164)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)", "LogStash::Filters::Base.multi_filter(/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:161)", "LogStash::Filters::Base.multi_filter(/usr/share/logstash/logstash-core/lib/logstash/filters/base.rb:161)", "LogStash::FilterDelegator.multi_filter(/usr/share/logstash/logstash-core/lib/logstash/filter_delegator.rb:46)", "LogStash::FilterDelegator.multi_filter(/usr/share/logstash/logstash-core/lib/logstash/filter_delegator.rb:46)", "RUBY.initialize((eval):1539)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1613)", "RUBY.initialize((eval):1532)", "org.jruby.RubyProc.call(org/jruby/RubyProc.java:281)", "RUBY.filter_func((eval):688)", "LogStash::Pipeline.filter_batch(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:398)", 
"LogStash::Pipeline.filter_batch(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:398)", "RUBY.worker_loop(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:379)", "RUBY.start_workers(/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:342)", "java.lang.Thread.run(java/lang/Thread.java:748)"]}

The GeoIP filter is crashing while handling an event. This definitely should fail more gracefully, but (a) what does your geoip filter configuration look like and (b) do you have an example event that is causing it to fail?

Looking at the plugin source, this error is raised when the source field for the geoip lookup is present but contains something other than a string (or an array of strings).
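That check can be sketched in Ruby as a hypothetical analogue of the plugin's Java logic (the method name, and the idea that only the first list entry is used, are assumptions for illustration, not the plugin's actual code):

```ruby
# Hypothetical Ruby analogue of the GeoIP filter's source-field check.
# A String passes, a list of strings passes, and any other type (e.g. a
# Hash from a parsed JSON header) raises the exact error seen in the log.
def check_geoip_source(value)
  case value
  when String then value
  when Array  then value.first  # assumed: only the first entry is looked up
  else raise "Expected input field value to be String or List type"
  end
end

check_geoip_source("203.0.113.7")        # fine: plain string
check_geoip_source(["203.0.113.7"])      # fine: list of strings
# check_geoip_source({ "ip" => "..." })  # raises: non-string value
```

So some event arriving on an input carries an X-Forwarded-For value that is not a string, and the filter aborts the whole pipeline worker instead of tagging the event.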

This is the geoip filter configuration in my Logstash config file:

geoip {
    source => "X-Forwarded-For"
    target => "geoip"
    database => "/opt/logstash/vendor/geoip/GeoLite2-City.mmdb"
    add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
    add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
}
mutate {
    convert => [ "[geoip][coordinates]", "float" ]
}
}
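For reference, the two add_field entries above write to the same field, so a successful lookup yields a two-element [longitude, latitude] array, which mutate/convert then casts to floats. A Ruby sketch of the resulting shape, with made-up sample coordinates and assuming the lookup succeeded:

```ruby
# Sketch of what the config above builds, with sample values.
# add_field runs twice against the same target field, so the second
# value is appended and the field becomes a [longitude, latitude] array.
geoip = { "longitude" => 24.9384, "latitude" => 60.1699 }
coordinates = []
coordinates << "%s" % geoip["longitude"]  # first add_field (sprintf -> string)
coordinates << "%s" % geoip["latitude"]   # second add_field
coordinates.map!(&:to_f)                  # mutate { convert => ... "float" }
p coordinates                             # => [24.9384, 60.1699]
```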

I don't have an example event. I only see these messages in the Logstash log when I restart the service.

Hi,
we changed the geoip configuration in the Logstash config file and everything is fine for us now. The Logstash service runs without errors.

if [X-Forwarded-For] =~ "[0-9].[0-9].[0-9].[0-9]" {
    geoip {
        source => "X-Forwarded-For"
        target => "geoip"
        database => "/opt/logstash/vendor/geoip/GeoLite2-City.mmdb"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
    }
    mutate {
        convert => [ "[geoip][coordinates]", "float" ]
    }
}
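The guard works because a Logstash `=~` conditional can only match when the field actually holds a string, so events whose X-Forwarded-For value is some other type (the case that crashed the geoip filter) skip the whole block. A Ruby sketch of the effective logic (the helper name is ours; note the unescaped dots in the pattern match any character, which is why multi-digit octets like 203.0.113.7 still pass):

```ruby
# Sketch of the guard's effective behavior: non-string values can never
# match the pattern, so the geoip block is skipped for them.
GUARD = /[0-9].[0-9].[0-9].[0-9]/  # same pattern as in the config above

def geoip_would_run?(value)
  value.is_a?(String) && !(value =~ GUARD).nil?
end

puts geoip_would_run?("203.0.113.7")              # true: geoip block runs
puts geoip_would_run?({ "ip" => "203.0.113.7" })  # false: hash, block skipped
```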

Thank you !
