/dev/null is not a directory, so I would expect Logstash to get an exception if it tried to create a file inside it.
However, my understanding is that each file in the DLQ is limited by the maximum segment size (fixed at 10MB) and the overall queue size is limited by the maximum queue size (1GB) which you can change.
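For reference, here is a minimal sketch of the relevant settings in `logstash.yml` (the path shown is hypothetical; `dead_letter_queue.max_bytes` is the overall cap, which defaults to 1024mb, and the 10MB segment size is fixed and not configurable):

```yaml
dead_letter_queue.enable: true
# overall queue cap per pipeline; segments within it are fixed at 10MB each
dead_letter_queue.max_bytes: 30mb
# hypothetical location; defaults to <path.data>/dead_letter_queue
path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue"
```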
I tried 30MB and got this (there are three 10MB files, as you suggested):
Jun 24 12:02:00 hostname logstash[7379]: message repeated 3 times: [ [2020-06-24T12:02:00,143][ERROR][org.logstash.common.io.DeadLetterQueueWriter] cannot write event to DLQ: reached maxQueueSize of 31457280]
Jun 24 12:02:00 hostname logstash[7379]: [2020-06-24T12:02:00,144][ERROR][org.logstash.common.io.DeadLetterQueueWriter] cannot write event to DLQ: reached maxQueueSize of 31457280
I'm not sure how the dead letter queue tracks its size, but it took a restart and deleting the files to get it writing to the queue again. I'm also not sure whether events are simply discarded at that point, or whether it retries five times per event and then gives up. I was hoping it would just overwrite older events, but I guess that's not the case.
Also, it appears the 30MB max size applies per pipeline.
You were right about /dev/null as well, Logstash refused to start up.
I think this is better than permanent 400s looping, because things seem to be processing, even if slowly. With the 400s it was also much more difficult to track down my problem entries, whereas with a DLQ they are readable in the 1.log, 2.log files.
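Rather than reading the segment files directly, the entries can also be replayed through a pipeline using the dead_letter_queue input plugin. A minimal sketch (the path and pipeline_id here are assumptions; adjust them to match your setup):

```
input {
  dead_letter_queue {
    # hypothetical DLQ location; point this at your path.dead_letter_queue
    path => "/var/lib/logstash/dead_letter_queue"
    # the pipeline whose DLQ segments should be read
    pipeline_id => "main"
    # remember position across restarts so entries are not re-read
    commit_offsets => true
  }
}
output {
  stdout { codec => rubydebug }
}
```

This also makes the failure reason visible, since each replayed event carries `[@metadata][dead_letter_queue]` fields describing why it was rejected.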