LS forwarder erroring out on the LS server


(Tim Dunphy) #1

Hey guys,

We have logstash-forwarder set up on a bunch of nodes, and initially results from those nodes turned up in the Logstash interface when you did a search.

However, the servers running the forwarder stopped showing up in search results in the Kibana/Logstash interface, and we started seeing these errors in the Logstash logs:

{:timestamp=>"2015-07-16T14:15:19.148000-0400", :message=>"An error occurred. Closing connection", :client=>"3.3.81.28:41584", :exception=>#<LogStash::ShutdownSignal: LogStash::ShutdownSignal>, :backtrace=>["org/jruby/RubyIO.java:2996:in `sysread'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-0.1.5/lib/logstash/inputs/tcp.rb:164:in `read'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-0.1.5/lib/logstash/inputs/tcp.rb:112:in `handle_socket'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-0.1.5/lib/logstash/inputs/tcp.rb:147:in `client_thread'"], :level=>:error}
{:timestamp=>"2015-07-16T14:15:19.148000-0400", :message=>"An error occurred. Closing connection", :client=>"3.3.81.115:45836", :exception=>#<LogStash::ShutdownSignal: LogStash::ShutdownSignal>, :backtrace=>["org/jruby/RubyIO.java:2996:in `sysread'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-0.1.5/lib/logstash/inputs/tcp.rb:164:in `read'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-0.1.5/lib/logstash/inputs/tcp.rb:112:in `handle_socket'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-input-tcp-0.1.5/lib/logstash/inputs/tcp.rb:147:in `client_thread'"], :level=>:error}

Does this provide any clues on how to solve this problem? Are there any troubleshooting steps you can recommend?

Thanks


(Mark Walkom) #2

Looks like it received a shutdown signal from somewhere?
Anything in your OS logs showing this?
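As a sketch of what that check might look like (the service name and log paths are assumptions; adjust for your init system and distro):

    # systemd journal: was the logstash unit stopped, restarted, or killed?
    sudo journalctl -u logstash | grep -iE 'signal|stop|shutdown|killed'

    # classic syslog: check for the OOM killer or an operator killing the process
    sudo grep -iE 'logstash|oom|killed process' /var/log/messages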


(Tim Dunphy) #3

Hey Warkolm,

Sorry for the delayed response. I've been on vacation for a week and then just been busy.

But it's been a while since we've seen this issue; it hasn't happened since. I'll let you know if it comes up again.

Thanks,

Tim

