To compensate for logging schema violations and keep Elasticsearch indexing happy, I am trying to catch log lines where the field message_json.msgObject.products
is an array of strings instead of an array of hashes. In that case, I move each string into its own hash under an id
key.
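For example (the product values here are made up), the reshaping I have in mind looks like this as a standalone Ruby sketch:

    # Hypothetical sample of the offending field: an array of plain strings.
    products = ["ABC-1", "ABC-2"]

    # Move each string into its own hash under an "id" key, which is the shape
    # the Elasticsearch mapping expects.
    products.map! { |product| { "id" => product } }

    puts products.inspect  # => [{"id"=>"ABC-1"}, {"id"=>"ABC-2"}]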
The Logstash filter looks like this:
if [message_json][msgObject][products] {
  ruby {
    code => "
      products = event.get('[message_json][msgObject][products]')
      if (products.length > 0 and products[0].is_a?(String))
        products.map!{|product| {'id': product}}
      end
      event.set('[message_json][msgObject][products]', products)
    "
  }
}
That filter did not cause any issues when it was deployed to Logstash from docker.elastic.co/logstash/logstash:5.6.6.
However, when I changed the Logstash container to docker.elastic.co/logstash/logstash:5.6.7, I started to get errors during startup:
SyntaxError: (ruby filter code):5: syntax error, unexpected ':'
products.map!{|product| {'id': product}}
^
eval at org/jruby/RubyKernel.java:1079
register at /usr/share/logstash/vendor/bundle/jruby/1.9/gems/logstash-filter-ruby-3.1.3/lib/logstash/filters/ruby.rb:59
register at /usr/share/logstash/vendor/jruby/lib/ruby/1.9/forwardable.rb:201
register_plugin at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:290
register_plugins at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301
each at org/jruby/RubyArray.java:1613
register_plugins at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:301
start_workers at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:311
run at /usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:235
start_pipeline at /usr/share/logstash/logstash-core/lib/logstash/agent.rb:408
It looks like the JRuby version did not change, but the ruby filter plugin is handled differently. Am I doing something stupid?