Dead Letter - cannot write event to DLQ: reached maxQueueSize of 2147483648

Hi Team,
I am facing a syncing issue with Elasticsearch, and when I checked my Logstash log I observed the below error:
"cannot write event to DLQ: reached maxQueueSize of 2147483648".

Is this causing the syncing issue, and how can I resolve it?

Can anyone help me with this?

Hello Ganesh,

There is an open issue regarding cleaning the dead letter queue here: https://github.com/elastic/logstash/issues/8795

If you do not need the data from the dead letter queue, you may ignore the error; in this case Logstash simply does not write to the DLQ anymore.
But I guess you want to read the data from the DLQ, so you would have to:

  • Create a Logstash pipeline for extracting the DLQ contents
  • Shut down Logstash
  • Remove the files from the DLQ directory
  • Start Logstash again

A pipeline extracting the contents might look like this (you have to modify the paths and the pipeline ID):

input {
  dead_letter_queue {
    # the DLQ directory of the original instance (its path.dead_letter_queue setting)
    path => "/logserver/data/data-logstash/dead_letter_queue"
    # remember how far we have read so entries are not processed twice
    commit_offsets => true
    # the ID of the pipeline whose DLQ should be read
    pipeline_id => "dummy"
  }
}
output {
  file {
    path => "/logserver/applications/logs/dead_letter_queue"
    # rubydebug with metadata also prints the DLQ reason fields
    codec => rubydebug { metadata => true }
  }
}
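You can either add this as an additional pipeline in pipelines.yml or run it once on its own with something like bin/logstash -f dlq_drain.conf (the config file name is just an example).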

This creates a file named /logserver/applications/logs/dead_letter_queue containing each original event plus additional information, such as the reason why it was sent to the DLQ, in a pretty-printed, JSON-like (rubydebug) format.
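Just for illustration (the values below are invented; only the [@metadata][dead_letter_queue] field names are the ones the plugin really adds), a single entry in that file looks roughly like this:

{
       "message" => "the original log line",
    "@timestamp" => 2020-06-26T11:45:31.609Z,
     "@metadata" => {
        "dead_letter_queue" => {
            "plugin_type" => "elasticsearch",
              "plugin_id" => "...",
             "entry_time" => 2020-06-26T12:05:04.682Z,
                 "reason" => "Could not index event to Elasticsearch. status: 400, ..."
        }
    }
}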

I hope this helps.

Best regards
Wolfram

Thanks for your reply @Wolfram_Haussig,

My dead letter queue is combining all the messages into a single line. How can I segregate them and index them into Elasticsearch?

Please find the dead letter message below:

2020-06-26T12:05:04.682Z¦qjava.util.HashMap¦dDATA¦xorg.logstash.ConvertedMap¦cenv¦torg.jruby.RubyStringbpr¦hfacility¦torg.jruby.RubyStringflocal1¦hseverity¦torg.jruby.RubyStringfnotice¦hfilename¦torg.jruby.RubyStringx7cache-sss.log¦kdata_centre¦torg.jruby.RubyStringdWest¦ctag¦torg.jruby.RubyStringkapplication¦dport¦kprogramname¦torg.jruby.RubyStringkapplication¦dhost¦torg.jruby.RubyStringk10.xx.xx.1¦j@timestamp¦vorg.logstash.Timestampx2020-06-26T11:45:31.609Z¦jsysloghost¦torg.jruby.RubyStringlaaaapppddddcc28¦hhostname¦torg.jruby.RubyStringlcaaaapppddddcc28¦ehnnum¦torg.jruby.RubyStringb28¦fprocid¦torg.jruby.RubyStringa-¦kenvironment¦torg.jruby.RubyStringjProduction¦fhnpref¦torg.jruby.RubyStringgcbmrmmr¦gmessage¦torg.jruby.RubyStringx0 To: Sun Aug 09 15:15:35 IST 2020]¦h@version¦torg.jruby.RubyStringa1¦eappid¦torg.jruby.RubyStringhmulesoft¦glogtype¦torg.jruby.RubyStringcapp¦¦¦dMETA¦xorg.logstash.ConvertedMaelasticsearch¦Could not index event to Elasticsearch. status: 400, action: ["index", {:_id=>nil, :_index=>"mulesoft-pr-logstash-2020.06", :_type=>"doc", :_routing=>nil}, #<LogStash::Event:0x3b0a2b84>], response: {"index"=>{"_index"=>"mulesoft-pr-logstash-2020.06", "_type"=>"doc", "_id"=>"t6eE8HIBQCAYDMTgVruI", "status"=>400, "error"=>{"type"=>"illegal_argument_exception", "reason"=>"number of documents in the index cannot exceed 2147483519"}}}c¦¦¦¦?¦¦

Hello Ganesh,

Which version are you on? This looks different from what I am used to. What is interesting is the message:

number of documents in the index cannot exceed 2147483519

This error message is new to me, but I guess it has to do with the limited number of documents a single shard within an index can support. How many primary shards does the index have?
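If it helps, the shard count and per-shard document counts can be checked with the _cat APIs, for example (using the index name from your error message):

GET _cat/indices/mulesoft-pr-logstash-2020.06?v&h=index,pri,rep,docs.count
GET _cat/shards/mulesoft-pr-logstash-2020.06?v&h=index,shard,prirep,docs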

Best regards
Wolfram

@Wolfram_Haussig currently we are using version 6.4 and we have 1 primary shard for each index.

Do you have any input on this issue, @Wolfram_Haussig?

Hello Ganesh,

I think you need to add more primary shards to your index. As it is not possible to change the number of shards of an existing index, you need to create a new one with a higher primary shard count.

Then you can reindex the old index into the new one, and after this migration you can delete the old index.
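A minimal sketch of that migration, assuming a new index called mulesoft-pr-logstash-2020.06-v2 with 4 primary shards (both are just examples; also make sure your mapping/template is applied to the new index before reindexing):

PUT mulesoft-pr-logstash-2020.06-v2
{
  "settings": {
    "index.number_of_shards": 4,
    "index.number_of_replicas": 1
  }
}

POST _reindex
{
  "source": { "index": "mulesoft-pr-logstash-2020.06" },
  "dest": { "index": "mulesoft-pr-logstash-2020.06-v2" }
}

Once the document counts match you can delete the old index and, if needed, add an alias with the old name pointing to the new index.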

Best regards
Wolfram

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.