Broken after upgrade


#1

Topic says it all... I upgraded to the latest version, 1.5.6, from the PPA, and now I am seeing this:

```
{:timestamp=>"2015-12-19T17:28:27.323000-0700", :message=>"Got error to send bulk of actions: Cannot find Serializer for class: org.jruby.RubyObject", :level=>:error}

{:timestamp=>"2015-12-19T17:28:27.323000-0700", :message=>"Failed to flush outgoing items", :outgoing_count=>1, :exception=>"JrJackson::ParseError", :backtrace=>["com/jrjackson/JrJacksonBase.java:78:in `generate'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/jrjackson-0.3.7/lib/jrjackson/jrjackson.rb:59:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json/adapters/jr_jackson.rb:20:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json/adapter.rb:25:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/multi_json-1.11.2/lib/multi_json.rb:136:in `dump'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/utils.rb:102:in `__bulkify'", "org/jruby/RubyArray.java:2414:in `map'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/utils.rb:102:in `__bulkify'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/elasticsearch-api-1.0.15/lib/elasticsearch/api/actions/bulk.rb:82:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch/protocol.rb:105:in `bulk'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch.rb:548:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch.rb:547:in `submit'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch.rb:572:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch.rb:571:in `flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:219:in `buffer_flush'", "org/jruby/RubyHash.java:1342:in `each'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:216:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:193:in `buffer_flush'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/stud-0.0.21/lib/stud/buffer.rb:159:in `buffer_receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-output-elasticsearch-1.1.0-java/lib/logstash/outputs/elasticsearch.rb:537:in `receive'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.6-java/lib/logstash/outputs/base.rb:88:in `handle'", "(eval):283:in `output_func'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.6-java/lib/logstash/pipeline.rb:244:in `outputworker'", "/opt/logstash/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.6-java/lib/logstash/pipeline.rb:166:in `start_outputs'"], :level=>:warn}
```

At this point Logstash just fills the log with this, stops receiving any input, and needs a `kill -s 9` to get it to stop. Any hints at all would be lovely. Thank you.


(Horst Birne) #2

Hi,
try adding

--verbose

as a startup option; then Logstash should log the events that cause the "failed to flush" exceptions.
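
For the PPA/deb install, that flag can go either on a manual foreground run or into the service's startup options. A minimal sketch, assuming the standard 1.5.x package layout (`/opt/logstash`, configs under `/etc/logstash/conf.d`, and an `LS_OPTS` variable sourced from `/etc/default/logstash`; check your init script if yours differs):

```
# Run Logstash in the foreground with verbose logging against the usual config directory
/opt/logstash/bin/logstash agent --verbose -f /etc/logstash/conf.d/

# Or make the service itself start with --verbose by extending LS_OPTS
# (variable name assumed from the 1.5.x deb packaging)
echo 'LS_OPTS="$LS_OPTS --verbose"' | sudo tee -a /etc/default/logstash
sudo service logstash restart
```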


#3

Thanks... after packet capturing, it looks like one of these strings causes this:

```
<190>Dec 21 00:00:51 ids bro_conn #separator \x09
<190>Dec 21 00:00:51 ids bro_conn #set_separator ,
<190>Dec 21 00:00:51 ids bro_conn #empty_field (empty)
<190>Dec 21 00:00:51 ids bro_conn #unset_field -
<190>Dec 21 00:00:51 ids bro_conn #path conn
<190>Dec 21 00:00:51 ids bro_conn #open 2015-12-21-00-00-51
<190>Dec 21 00:00:51 ids bro_conn #fields ts uid id.orig_h id.orig_p id.resp_h id.resp_p proto service duration orig_bytes resp_bytes conn_state local_orig local_resp missed_bytes history orig_pkts orig_ip_bytes resp_pkts resp_ip_bytes tunnel_parents
<190>Dec 21 00:00:51 ids bro_conn #types time string addr port addr port enum string interval count count string bool bool count string count count count count set[string]
```

This happens once a day when these logs rotate. We did not have this issue with 1.5.4, but now that we are on 1.5.6, it is a problem.
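
Since these are Bro's per-rotation metadata lines rather than real connection records, one workaround (until the serialization issue itself is sorted out) would be to drop them in the filter stage before they ever reach the elasticsearch output. A minimal sketch, assuming the raw syslog payload lands in the message field; the regex is just keyed off the "bro_conn #" header marker visible in the capture above:

```
filter {
  # Bro writes header lines (#separator, #fields, #types, ...) each time a
  # log rotates; they are not connection records, so discard them before
  # any field parsing or the elasticsearch output sees them.
  if [message] =~ /bro_conn #/ {
    drop { }
  }
}
```

The field name and pattern are assumptions based on the sample above; the same check could be made against whatever field your grok puts the header text into.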


#4

So after rolling back to 1.5.4, here's what I see in the logs:
{:timestamp=>"2015-12-21T16:58:50.564000-0700", :message=>"IP Field contained invalid IP address or hostname", :field=>"src_ip", :event=>#<LogStash::Event:0x1d3c8326 @metadata_accessors=#<LogStash::Util::Accessors:0xb86a4db @store={}, @lut={}>, @cancelled=false, @data={"message"=>"<190>Dec 21 23:58:25 dcids bro_conn #fields\tts\tuid\tid.orig_h\tid.orig_p\tid.resp_h\tid.resp_p\tproto\tservice\tduration\torig_bytes\tresp_bytes\tconn_state\tlocal_orig\tlocal_resp\tmissed_bytes\thistory\torig_pkts\torig_ip_bytes\tresp_pkts\tresp_ip_bytes\ttunnel_parents", "@version"=>"1", "@timestamp"=>"2015-12-21T23:58:25.480Z", "type"=>"Conn", "host"=>"x.x.x.x", "unixtime"=>"<190>Dec 21 23:58:25 dcids bro_conn #fields", "uid"=>"ts", "src_ip"=>"uid", "src_port"=>"id.orig_h", "dst_ip"=>"id.orig_p", "dst_port"=>"id.resp_h", "proto"=>"id.resp_p", "service"=>"proto", "duration"=>"service", "orig_bytes"=>"duration", "resp_bytes"=>"orig_bytes", "conn_state"=>"resp_bytes", "local_orig"=>"conn_state", "local_resp"=>"local_orig", "missed_bytes"=>"local_resp", "history"=>"missed_bytes", "orig_packts"=>"history", "orig_ip_bytes"=>"orig_pkts", "resp_packts"=>"orig_ip_bytes", "resp_ip_bytes"=>"resp_pkts", "t<snipped>

Looks like the #fields line did indeed cause this... interesting that 1.5.4 could handle it, but 1.5.6 could not.
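
Another way to make this robust regardless of version would be to guard whatever filter consumes src_ip so it only runs when the field actually looks like an address. A minimal sketch; the geoip filter here is only a stand-in for whichever plugin in the real config does the IP handling (it is not shown in this thread), and the pattern only covers IPv4:

```
filter {
  # Only hand src_ip to the IP-consuming filter when it looks like an IPv4
  # address; header events that grok mis-parses (src_ip => "uid") are skipped.
  if [src_ip] =~ /^\d{1,3}(\.\d{1,3}){3}$/ {
    geoip {
      source => "src_ip"
    }
  }
}
```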

