I recently added a filter that seemed to work fine at first, but it crashes Logstash every couple of hours with the following message:
{
  "level": "FATAL",
  "logEvent": {
    "error": {
      "metaClass": {
        "metaClass": {
          "metaClass": {
            "backtrace": [
              "org/jruby/RubyString.java:4462:in `include?'",
              "(eval):274760:in `initialize'",
              "org/jruby/RubyArray.java:1613:in `each'",
              "(eval):274758:in `initialize'",
              "org/jruby/RubyProc.java:281:in `call'",
              "(eval):21975:in `filter_func'",
              "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:398:in `filter_batch'",
              "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:379:in `worker_loop'",
              "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:342:in `start_workers'"
            ],
            "error": "can't convert Fixnum into String"
          }
        }
      }
    },
    "message": "An unexpected error occurred!"
  },
  "loggerName": "logstash.runner",
  "thread": "LogStash::Runner",
  "timeMillis": 1542319501747
}
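The top frame of the backtrace is RubyString's `include?`, which raises exactly this TypeError when it is handed an Integer (a Fixnum on the older JRuby that Logstash bundles) instead of a String. A minimal plain-Ruby sketch of the failure:

```ruby
# String#include? only accepts a String argument, so passing an Integer
# raises a TypeError. Older JRuby words the message as
# "can't convert Fixnum into String"; modern Rubies say
# "no implicit conversion of Integer into String".
begin
  "some,tags".include?(42)
rescue TypeError => e
  puts e.class
  puts e.message
end
```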
The filter that triggers the error looks like this:
filter {
  if [type] == "elasticsearch-index-name" {
    grok {
      patterns_dir => '/etc/logstash/patterns/custom-pattern/'
      match => [ "message", "%{CUSTOM_GROK}" ]
    }
    if !("_grokparsefailure" in [tags]) {
      date {
        match => ["timestamp", "MMM dd HH:mm:ss", "MMM d HH:mm:ss" ]
      }
      ruby {
        code => "event.set('elk_lag', event.get('elk_recv_timestamp')-event.get('@timestamp'))"
      }
      json {
        source => "data"
      }
    }
    mutate {
      add_field => { "rotation_period" => "{{ rotation_period }}" }
      add_field => { "tgt_index" => "elasticsearch-index-name" }
    }
  }
}
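My best guess so far is that the ruby filter's subtraction assumes `elk_recv_timestamp` is a `LogStash::Timestamp`, but if some other filter in the production pipeline leaves it as a String or an Integer, the arithmetic can raise. To test that theory I am considering guarding the subtraction, roughly like this (`_elk_lag_failure` is just a tag name I made up, and converting both values with `to_f` before subtracting is only a sketch):

```
ruby {
  code => "
    recv = event.get('elk_recv_timestamp')
    ts   = event.get('@timestamp')
    # only compute the lag when both fields are real timestamps
    if recv.is_a?(LogStash::Timestamp) && ts.is_a?(LogStash::Timestamp)
      event.set('elk_lag', recv.to_f - ts.to_f)
    else
      event.tag('_elk_lag_failure')
    end
  "
}
```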
It stops breaking if I revert the filter to this:
filter {
  if [type] == "elasticsearch-index-name" {
    mutate {
      add_field => { "rotation_period" => "{{ rotation_period }}" }
      add_field => { "tgt_index" => "elasticsearch-index-name" }
    }
  }
}
If I feed a log file that would normally break it through the input plugin, everything works fine. The problem only appears when I start Logstash in production alongside all the other filters.