Logstash DLQ not starting up (Dockerized LS)

I have a Dockerized Logstash setup (Logstash Docker 7.13) running a pipeline.
I added the DLQ settings to the Logstash container via environment variables and run a second pipeline (named "dlq-mypipeline") in the same container.
I do see dead letter queue files generated under the configured path.data (1.log.tmp), but the DLQ pipeline refuses to start with the errors below.
(The main pipeline, "mypipeline", loads fine.)
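For context, this is roughly how the container is configured. The paths, image tag, and container name here are illustrative, not my exact setup; the env-var-to-setting mapping (e.g. DEAD_LETTER_QUEUE_ENABLE becoming dead_letter_queue.enable) is how the official Logstash Docker image translates environment variables into logstash.yml settings:

```shell
# Sketch of the container setup (values are illustrative).
# The official Logstash image maps environment variables to
# logstash.yml settings, e.g.:
#   DEAD_LETTER_QUEUE_ENABLE  -> dead_letter_queue.enable
#   PATH_DEAD_LETTER_QUEUE    -> path.dead_letter_queue
docker run -d --name logstash \
  -e DEAD_LETTER_QUEUE_ENABLE=true \
  -e PATH_DEAD_LETTER_QUEUE=/custompath/data/dead_letter_queue \
  -v /host/data:/custompath/data \
  -v /host/pipeline:/usr/share/logstash/pipeline \
  docker.elastic.co/logstash/logstash:7.13.4
```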

[ERROR][logstash.agent           ] Failed to execute action {:id=>:"dlq-mypipeline", :action_type=>LogStash::ConvergeResult::FailedAction, :message=>"Could not execute action: PipelineAction::Create<dlq-mypipeline>, action_result: false", :backtrace=>nil}
[ERROR][logstash.javapipeline    ] Pipeline error {:pipeline_id=>"dlq-mypipeline", :exception=>java.lang.NullPointerException, :backtrace=>["org.logstash.common.io.DeadLetterQueueReader.seekToNextEvent(org/logstash/common/io/DeadLetterQueueReader.java:98)", "org.logstash.input.DeadLetterQueueInputPlugin.register(org/logstash/input/DeadLetterQueueInputPlugin.java:76)", "jdk.internal.reflect.GeneratedMethodAccessor281.invoke(jdk/internal/reflect/GeneratedMethodAccessor281)", "jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(jdk/internal/reflect/DelegatingMethodAccessorImpl.java:43)", "java.lang.reflect.Method.invoke(java/lang/reflect/Method.java:566)", "org.jruby.javasupport.JavaMethod.invokeDirectWithExceptionHandling(org/jruby/javasupport/JavaMethod.java:441)", "org.jruby.javasupport.JavaMethod.invokeDirect(org/jruby/javasupport/JavaMethod.java:305)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_input_minus_dead_letter_queue_minus_1_dot_1_dot_5.lib.logstash.inputs.dead_letter_queue.register(/usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-dead_letter_queue-1.1.5/lib/logstash/inputs/dead_letter_queue.rb:55)", "usr.share.logstash.vendor.bundle.jruby.$2_dot_5_dot_0.gems.logstash_minus_input_minus_dead_letter_queue_minus_1_dot_1_dot_5.lib.logstash.inputs.dead_letter_queue.RUBY$method$register$0$__VARARGS__(usr/share/logstash/vendor/bundle/jruby/$2_dot_5_dot_0/gems/logstash_minus_input_minus_dead_letter_queue_minus_1_dot_1_dot_5/lib/logstash/inputs//usr/share/logstash/vendor/bundle/jruby/2.5.0/gems/logstash-input-dead_letter_queue-1.1.5/lib/logstash/inputs/dead_letter_queue.rb)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.register_plugins(/usr/share/logstash/logstash-core/lib/logstash/java_pipeline.rb:228)", "org.jruby.RubyArray.each(org/jruby/RubyArray.java:1809)", "usr.share.logstash.logstash_minus_core.lib.logstash.java_pipeline.register_plugins(/usr/share/l

Here is the simple DLQ pipeline

input {
  dead_letter_queue {
    path => "/custompath/data/dead_letter_queue"
    commit_offsets => true
    pipeline_id => "mypipeline"
  }
}

output {
  stdout {
    codec => rubydebug { metadata => true }
  }
}

Looking at the code in DeadLetterQueueReader.java, an NPE at line 98 will happen if segments is an empty list. Looking at DeadLetterQueueWriter.java, that would appear to happen if there were no .log files in the DLQ directory.

What I suspect is that you cannot start a dead_letter_queue input if there are no events in the DLQ. That would clearly be a bug (probably easy to fix) but I think it is credible that nobody has ever found it. (A very superficial scan of the CI tests did not find one for it.) My guess is people usually write their DLQ processing after they realize there are a bunch of events in the DLQ.

Actually, I think it is worse than that: there can be data in the DLQ, but if the first .tmp file has not been finalised, there will be no .log file. That raises other possible conditions where temp files are not finalised, e.g. during shutdowns or crashes. I am not going to invest time in reading the code for that.
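As a stopgap until that is fixed, you could check for finalised segments before starting the DLQ pipeline. A minimal sketch (the directory path and the start helper are hypothetical, not part of Logstash):

```shell
#!/bin/sh
# Return 0 if the DLQ directory for a pipeline contains at least one
# finalised segment (*.log). Files still being written (*.log.tmp)
# do not match the glob and do not count -- which is exactly the
# state that appears to trip the NPE.
dlq_has_segments() {
  dir="$1"
  for f in "$dir"/*.log; do
    # If the glob matched nothing, $f is the literal pattern;
    # the -e test catches that case and we fall through to failure.
    [ -e "$f" ] && return 0
  done
  return 1
}

# Usage sketch: only start the DLQ pipeline when there is data to read.
# if dlq_has_segments /custompath/data/dead_letter_queue/mypipeline; then
#   start_dlq_pipeline   # hypothetical helper
# fi
```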

Oh, and this being Docker, make sure the directories you think are mounted really are mounted. That one comes up again and again.
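A quick way to verify that (the container name "logstash" is illustrative):

```shell
# Print the mounts Docker actually set up for the container...
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ "\n" }}{{ end }}' logstash

# ...then confirm the DLQ path really exists inside the container.
docker exec logstash ls -la /custompath/data/dead_letter_queue
```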


Thanks for the info @Badger! Yes, I don't think there is any data in the DLQ yet.
I did mount an external volume into the container, so the pipeline's subdirectory is already present when the DLQ pipeline attempts to start.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.