Logstash rubyjit error with filter

Hello,

I am experiencing an issue where Logstash throws an error and stops running.

I get the following error:

[2017-08-11T19:20:51,987][ERROR][logstash.pipeline        ] allSite.java:154)", "rubyjit.LogStash::Filters::Json$$filter_40ff8d4c1569dcd49c050f4a5093a76d6c4d61381808253012.__file__(C:/Tempest/Tempest_Data_Caching_TEST/logstash-5.5.1/vendor/bundle/jruby/1.9/gems/logstash-filter-json-3.0.3/lib/logstash/filters/json.rb:106)", "rubyjit.LogStash::Filters::Json$$filter_40ff8d4c1569dcd49c050f4a5093a76d6c4d61381808253012.__file__(C:/Tempest/Tempest_Data_Caching_TEST/logstash-5.5.1/vendor/bundle/jruby/1.9/gems/logstash-filter-json-3.0.3/lib/logstash/filters/json.rb)", "org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:201)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:336)", "org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:179)", "org.jruby.runtime.callsite.CachingCallSite.call(CachingCallSite.java:183)", "rubyjit.LogStash::Filters::Base$$do_filter_8e8403dcfdf01a35ffca12ed35ec4e79455489071808253012.__file__(C:/Tempest/Tempest_Data_Caching_TEST/logstash-5.5.1/logstash-core/lib/logstash/filters/base.rb:145)", "rubyjit.LogStash::Filters::Base$$do_filter_8e8403dcfdf01a35ffca12ed35ec4e79455489071808253012.__file__(C:/Tempest/Tempest_Data_Caching_TEST/logstash-5.5.1/logstash-core/lib/logstash/filters/base.rb)", "org.jruby.internal.runtime.methods.JittedMethod.call(JittedMethod.java:201)", "org.jruby.runtime.callsite.CachingCallSite.cacheAndCall(CachingCallSite.java:336)", "org.jruby.runtime.callsite.CachingCallSite.callBlock(CachingCallSite.java:179)", "org.jruby.runtime.callsite.CachingCallSite.callIter(CachingCallSite.java:188)", "rubyjit.LogStash::Filters::Base$$multi_filter_6c0dea8219a042f89f5a5b41e60697f8088a7c451808253012.block_0$RUBY$__file__(C:/Tempest/Tempest_Data_Caching_TEST/logstash-5.5.1/logstash-core/lib/logstash/filters/base.rb:164)", "rubyjit$LogStash::Filters::Base$$multi_filter_6c0dea8219a042f89f5a5b41e60697f8088a7c451808253012$block_0$RUBY$__file__.call(rubyjit$LogStash::Filters::Base$$multi_filter_6c0dea8219a042f89f5a5b41e60697f8088a7c451808253012$block_0$RUBY$__file__)", "org.jruby.runtime.CompiledBlock19.yield(CompiledBlock19.java:135)", "org.jruby.runtime.Block.yield(Block.java:142)", "org.jruby.RubyArray.eachCommon(RubyArray.java:1606)", "org.jruby.RubyArray.each(RubyArray.java:1613)"

Can anyone shed some light on what may be causing the error? Are the log lines too big? Is there something wrong with the filter I am using?

Could there be an exception that is not being caught when Logstash attempts to mutate a field?

Any help would be very much appreciated.

Edit: I could not post the rest of the error message as it exceeds the character limit.

I can't post my full config because it exceeds the character limit allowed on this forum, so here is a condensed version:

input {
	beats {
		host => "127.0.0.1"
		port => "5044"
	}
}

filter {
	json {
		source => "message"
	}
	
	ruby {
		code => "event.set('filename', (event.get('source').split('\\').last).split('.').first)"
	}
	
	if ![COND_DATE]  {
		mutate {
			add_field => {"COND_DATE" => "%{Date}"}
		}
	}

	mutate {
		add_field => { "COND_DATE_TIME" => "%{COND_DATE} %{COND_TIME}"} 
		remove_field => ["message", "COND_DATE", "COND_TIME"]
	}

	date {
		match => ["COND_DATE_TIME", "yyyyMMdd hh:mm:ss a"]
		target => "COND_DATE_TIME"
	}
	
	date {
		match => ["COND_DATE_TIME", "yyyyMMdd HH:mm:ss"]
		target => "COND_DATE_TIME"
	}
	
	date {
		match => ["COND_DATE_TIME", "MM-dd-yyyy hh:mm:ss a"]
		target => "COND_DATE_TIME"
	}
	
	date {
		match => ["COND_DATE_TIME", "MM-dd-yyyy HH:mm:ss"]
		target => "COND_DATE_TIME"
	}

	mutate { convert => ["COND_TEMP_C", "float"] }
	mutate { convert => ["MEAS_TEMP_C", "float"] }
	mutate { convert => ["COND_LOOP", "integer"] }
	mutate { convert => ["COND_TEST_NUMBER", "integer"] }
	mutate { convert => ["COND_FAIL_BIN", "integer"] }
	mutate { convert => ["COND_VDD_V", "float"] }
	mutate { convert => ["COND_VCTRL_HI_V", "float"] }
	mutate { convert => ["COND_VSS_V", "float"] }
	mutate { convert => ["COND_VDD_I_LIMIT_mA", "float"] }
	mutate { convert => ["COND_VCTRL_I_LIMIT_uA", "float"] }
	mutate { convert => ["COND_VSS_I_LIMIT_mA", "float"] }
	mutate { convert => ["COND_REF_TEST_NUM", "integer"] }
	mutate { convert => ["COND_LOWER_FREQ_MHz", "float"] }
	mutate { convert => ["COND_IF_PWR_dBm", "float"] }
	mutate { convert => ["COND_RF_FREQ_Mhz", "float"] }
	mutate { convert => ["MEAS_RF_PWR_dBm", "float"] }
	mutate { convert => ["MEAS_4FO_OUTPUT_PWR_AVG_dBm", "float"] }
	mutate { convert => ["MEAS_4FO_OUTPUT_PWR_PEAK_dBm", "float"] }
	mutate { convert => ["COND_IDAC_LIMIT_mA", "float"] }
	mutate { convert => ["COND_DAC_FORCED_CURRENT_mA", "float"] }
	mutate { convert => ["COND_DAC_FORCED_VOLTAGE_V", "float"] }
	mutate { convert => ["MEAS_DAC_VOLTAGE_mA", "float"] }
	mutate { convert => ["MEAS_DAC_CURRENT_mA", "float"] }
	mutate { convert => ["MEAS_VEN_V", "float"] }
	mutate { convert => ["MEAS_VMOD_V", "float"] }
	mutate { convert => ["MEAS_MIN_WRITE_SCLK_PERIOD_MHZ", "float"] }
	mutate { convert => ["MEAS_MIN_WRITE_SCLK_PERIOD_NS", "float"] }
	mutate { convert => ["MEAS_MIN_READ_SCLK_PERIOD_MHZ", "float"] }
	mutate { convert => ["MEAS_MIN_READ_SCLK_PERIOD_NS", "float"] }
	mutate { convert => ["MEAS_SCLK_MIN_HIGH_TIME", "float"] }
	mutate { convert => ["MEAS_SCLK_MIN_LOW_TIME", "float"] }
	mutate { convert => ["MEAS_IIH_uA", "float"] }
	mutate { convert => ["MEAS_IIL_uA", "float"] }
	mutate { convert => ["MEAS_VOH", "float"] }
	mutate { convert => ["MEAS_VOL", "float"] }
	mutate { convert => ["MEAS_IDDQ", "float"] }
	mutate { convert => ["MEAS_SDATA_SETUP", "float"] }
	mutate { convert => ["MEAS_SDATA_HOLD", "float"] }
	mutate { convert => ["MEAS_SDATA_STABILIZATION", "float"] }
}

output {
	elasticsearch {
		hosts => ["automationdev:9200"]
		index => "benchmark-00-system-tests"
		document_type => "products"
		document_id => "%{filename}_%{row_id}"
	}
	stdout {}
}

My log lines look like this:

{"COND_SERIAL_NUMBER":"100","COND_LOT_NUMBER":"124","COND_DATE":"20170101","COND_TIME":"11:02:03 AM", ... ,"Operator":"administrator","Station":"TEST-01","Date":"08-01-2017","Start Time":"11:00:00","row_id":0}

Note that the '...' above denotes other possible fields; these logs come from different machines. Different machines produce similar fields (such as COND_SERIAL_NUMBER, Date, Start Time, etc.) but can also contain different ones, which is why my filter section is so long. A single log line can contain up to 300 fields, so the original config has roughly 200 more field conversions just like the ones above.
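As an aside, I believe the filter section could be condensed quite a bit: mutate's convert option also accepts a hash of field-to-type mappings, and a single date filter accepts several patterns and tries them in order. A sketch of a shorter version (with '...' standing in for the remaining fields, as above) would be:

	filter {
		# One mutate can convert many fields at once instead of
		# one mutate block per field.
		mutate {
			convert => {
				"COND_TEMP_C" => "float"
				"MEAS_TEMP_C" => "float"
				"COND_LOOP" => "integer"
				# ... remaining field conversions ...
			}
		}

		# A single date filter can list all four patterns; they are
		# tried in order until one matches.
		date {
			match => ["COND_DATE_TIME", "yyyyMMdd hh:mm:ss a", "yyyyMMdd HH:mm:ss", "MM-dd-yyyy hh:mm:ss a", "MM-dd-yyyy HH:mm:ss"]
			target => "COND_DATE_TIME"
		}
	}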

I may have figured out the issue: the problem was with the log files. Some of the fields that I am converting from string to float/integer/boolean had an empty string ("") assigned to them. I tweaked the config to remove those fields altogether to avoid the error.
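In case it helps anyone else, here is a rough sketch of how the same cleanup could be done inside the pipeline instead of by editing the log files. This is untested and assumes that any field carrying an empty string is safe to drop:

	filter {
		ruby {
			# Drop every top-level field whose value is an empty
			# string, so the mutate/convert filters never see one.
			code => "
				event.to_hash.each do |key, value|
					event.remove(key) if value == ''
				end
			"
		}
	}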

Edit: I also moved
remove_field => ["message", "COND_DATE", "COND_TIME"]
to its own mutate filter at the very end.
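That is, the config now ends with:

	mutate {
		remove_field => ["message", "COND_DATE", "COND_TIME"]
	}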
