Logstash using protobuf as http input codec

Hi everyone. I'm facing some issues with a dockerized Logstash when I use an http input with protobuf as the codec.

I'm using Logstash (6.2.3) to receive Dynatrace Business Transactions and then route them to different Elasticsearch clusters, but for now the output is sent to a file to check that everything works. The BT feed uses protobuf to send the data, so I have installed logstash-codec-protobuf.

[root@xxxx bin]# logstash-plugin list --verbose | grep protobuf
logstash-codec-protobuf (1.0.5)

My logstash configuration is:

[root@SCMADEX018 pipeline]# cat logstash.conf
input {
  http {
    port => 9280
    codec => protobuf {
      class_name => [ "Export::Bt::BtOccurrence", "Export::Bt::BusinessTransactions", "Export::Bt::BusinessTransaction" ]
      include_path => ['/usr/share/logstash/protobuf/dyna.pb.rb']
    }
    threads => 4
    type => "protobuf_http"
  }
}

output {
  file { path => "/tmp/testing-out-%{+YYYY.MM.dd}" }
  stdout { codec => rubydebug { metadata => true } }
}

The .pb.rb definition file was generated with the ruby-protoc compiler, and the original protobuf file was downloaded from the Dynatrace website. The generated file is at the path referenced above.

When the Docker container starts, it suddenly stops with these messages in the log:

[2018-03-27T06:34:26,069][ERROR][logstash.plugins.registry] Tried to load a plugin's code, but failed. {:exception=>#<LoadError: no such file to load -- google/protobuf>, :path=>"logstash/codecs/protobuf", :type=>"codec", :name=>"protobuf"}
[2018-03-27T06:34:26,113][ERROR][logstash.agent           ] Failed to execute action {:action=>LogStash::PipelineAction::Create/pipeline_id:main, :exception=>"LogStash::PluginLoadingError", :message=>"Couldn't find any codec plugin named 'protobuf'. Are you sure this is correct? Trying to load the protobuf codec plugin resulted in this error: no such file to load -- google/protobuf", :backtrace=>["/usr/share/logstash/logstash-core/lib/logstash/plugins/registry.rb:192:in `lookup_pipeline_plugin'", "/usr/share/logstash/logstash-core/lib/logstash/plugin.rb:140:in `lookup'", "/usr/share/logstash/logstash-core/lib/logstash/plugins/plugin_factory.rb:81:in `plugin'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:112:in `plugin'", "(eval):8:in `<eval>'", "org/jruby/RubyKernel.java:994:in `eval'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:84:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline.rb:169:in `initialize'", "/usr/share/logstash/logstash-core/lib/logstash/pipeline_action/create.rb:40:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:315:in `block in converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:312:in `block in converge_state'", "org/jruby/RubyArray.java:1734:in `each'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:299:in `converge_state'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:166:in `block in converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:141:in `with_pipelines'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:164:in `converge_state_and_update'", "/usr/share/logstash/logstash-core/lib/logstash/agent.rb:90:in `execute'", "/usr/share/logstash/logstash-core/lib/logstash/runner.rb:348:in `block in execute'", "/usr/share/logstash/vendor/bundle/jruby/2.3.0/gems/stud-0.0.23/lib/stud/task.rb:24:in `block in 
initialize'"]}

The plugin seems to be properly installed, since it appears in the plugin list as I showed above.

Any tip?
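One thing worth double-checking (this is an assumption on my side, not something confirmed above): with a dockerized Logstash, the codec has to be installed inside the container image itself; a plugin installed with logstash-plugin on the host is invisible to the containerized instance. A minimal sketch of baking it into a custom image:

```dockerfile
# Sketch only: custom image with the codec preinstalled.
# Base image tag matches the Logstash version mentioned above (6.2.3).
FROM docker.elastic.co/logstash/logstash:6.2.3
RUN bin/logstash-plugin install logstash-codec-protobuf
```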

I found a workaround to get it working.

First of all, from inside the container, I changed the ownership of some files (for instance the protobuf definition and the Logstash configuration files) to the logstash user, and moved the protobuf definition file to /config/protobuf to keep things tidy.

Then, without restarting the container, I made some more changes in the Logstash configuration file: I changed the port of the http input and the class_name of its protobuf codec.
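For illustration, the edited input section looked roughly like this (the new port number, the single class name, and the absolute path under /usr/share/logstash are assumptions on my part; only the fields mentioned above were changed):

```
input {
  http {
    port => 9281
    codec => protobuf {
      class_name => "Export::Bt::BusinessTransactions"
      include_path => ['/usr/share/logstash/config/protobuf/dyna.pb.rb']
    }
    threads => 4
    type => "protobuf_http"
  }
}
```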

With those changes in place, I ran a second instance inside the container, pointing it at a different data path to avoid a conflict with the already running Logstash:

/usr/share/logstash/bin/logstash --path.data /tmp

With this trick I have two Logstash instances running: the first one uses the original configuration without the codec and is not in use, while the second one, started from inside the container with the codec configuration, is the one doing the work.
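To confirm that the second instance is the one answering, a quick smoke test can be sent to its http input (the port and the payload file here are placeholders; the payload would need to be a real serialized BT message for the codec to decode it):

```shell
# Hypothetical check: post a binary protobuf payload to the http input
curl -s --data-binary @bt_sample.bin http://localhost:9280/
```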

Regards!
