Logstash pipeline getting terminated

Hi Experts,
I am running Logstash in a Docker container with the following pipeline configuration.

input {
    tcp {
        port => 5000
        codec => line
    }
}
filter {
    grok {
        # Declaring "match" twice with the same field means only one of the
        # hashes takes effect; the idiomatic form is a single match with an
        # array of patterns, tried in order.
        match => {
            "message" => [
                "%{SYSLOGTIMESTAMP:time} %{DATA:stream_id} %{DATA:trace_name} %{DATA:node_name} %{NUMBER:count}# %{DATA:thread_id} %{GREEDYDATA:data}",
                "%{SYSLOGTIMESTAMP:time} %{DATA:stream_id} %{DATA:trace_name} %{DATA:node_name} %{DATA:thread_id} %{GREEDYDATA:data}"
            ]
        }
    }
    mutate {
      gsub => [
        # replace all forward slashes with underscore
        "node_name", "/", "_"
      ]
    }
}
output {
    file {
        path => "/usr/share/logstash/data/%{stream_id}/%{node_name}/%{trace_name}.gz"
        codec => line
        gzip => true
    }
    stdout {
        codec => rubydebug
    }
}

I have multiple clients/devices connecting to the Logstash server on TCP port 5000 and sending log messages in plain text. After a few minutes of operation, the pipeline is terminated with the error message below:

[2023-07-17T23:56:13,698][ERROR][logstash.javapipeline    ][main] Pipeline worker error, the pipeline will be stopped {:pipeline_id=>"main", :error=>"(ArgumentError) string contains null byte", :exception=>Java::OrgJrubyExceptions::ArgumentError, :backtrace=>["org.jruby.RubyFile.expand_path(org/jruby/RubyFile.java:845)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_file_minus_4_dot_3_dot_0.lib.logstash.outputs.file.inside_file_root?(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-file-4.3.0/lib/logstash/outputs/file.rb:171)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_file_minus_4_dot_3_dot_0.lib.logstash.outputs.file.event_path(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-file-4.3.0/lib/logstash/outputs/file.rb:177)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_file_minus_4_dot_3_dot_0.lib.logstash.outputs.file.multi_receive_encoded(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-file-4.3.0/lib/logstash/outputs/file.rb:113)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1865)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_6_dot_0.gems.logstash_minus_output_minus_file_minus_4_dot_3_dot_0.lib.logstash.outputs.file.multi_receive_encoded(/usr/share/logstash/vendor/bundle/jruby/2.6.0/gems/logstash-output-file-4.3.0/lib/logstash/outputs/file.rb:112)", "usr.share.logstash.logstash_minus_core.lib.logstash.outputs.base.multi_receive(/usr/share/logstash/logstash-core/lib/logstash/outputs/base.rb:103)", "org.logstash.config.ir.compiler.AbstractOutputDelegatorExt.multi_receive(org/logstash/config/ir/compiler/AbstractOutputDelegatorExt.java:121)", "RUBY.start_workers(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:301)"], :thread=>"#<Thread:0x720234a6@/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:131 sleep>"}

Can someone tell me how to work around this exception? I checked the log messages sent from the clients, and some of them do contain an empty message. Is there a way to exclude such messages from processing in the pipeline?
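For context, a sketch of one possible workaround I have been considering, assuming the NUL bytes arrive in the TCP payload itself and end up in the fields interpolated into the file output's `path` (field names follow the config above; the exact patterns may need adjusting):

```
filter {
  # Strip NUL bytes from the raw message so they cannot reach the
  # file output's path interpolation ("string contains null byte").
  mutate {
    gsub => [ "message", "\u0000", "" ]
  }
  # Drop events whose message is empty or whitespace-only after stripping.
  if [message] =~ /^\s*$/ {
    drop { }
  }
  # Optionally also drop events grok could not parse, so the path never
  # contains literal "%{stream_id}" etc. from unresolved field references.
  if "_grokparsefailure" in [tags] {
    drop { }
  }
}
```

I am not sure whether this is the recommended approach, or whether it handles all cases.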

Thanks,
Arinjay
