Dead letter queue

Hi,

I am trying to add some additional redundancy to my RELK (Redis + ELK) environment with the dead letter queue, in case the connection between Logstash and Elasticsearch goes down.

However, it seems that the dead letter queue is not working as I expected.

Expected behaviour:
Read events from redis -> try to index events to ES -> not successful (since connection to ES is down) -> save events to dead_letter_queue -> repeat

Actual behaviour:

Read events from redis -> try to index events to ES -> not successful (since connection to ES is down) -> logstash blocks while retrying the failed request -> logstash stops reading events from redis -> redis slowly fills up

When the logstash service restarts, logstash creates 1.log, 2.log, etc. files in the specified dead_letter_queue path, each 1 byte in size, so it is at least not a file permission issue.
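
For reference, the DLQ is enabled in my logstash.yml with settings along these lines (a sketch; the path is an example, not my actual one):

dead_letter_queue.enable: true
# example path; defaults to ${path.data}/dead_letter_queue if unset
path.dead_letter_queue: "/var/lib/logstash/dead_letter_queue"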

Logstash version: 5.5.2-1
Elasticsearch version: 5.5.2
logstash-input-dead_letter_queue version: 1.0.6

logstash config file:

input {
  redis {
    host => "IP address"
    data_type => "list"
    key => "keyname"
  }
}
filter {
  grok {
    patterns_dir => ["/etc/logstash/patterns"]
    match => { "message" => "%{MYPATTERN}" }
  }
}
output {
  elasticsearch {
    hosts => ["IP address:9200"]
    index => "test-index-%{+dd.MM.yyyy}"
  }
}
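
My plan was to replay the queued events with the dead_letter_queue input once the connection is back, roughly like this (a sketch; the path is an example and must match path.dead_letter_queue):

input {
  dead_letter_queue {
    path => "/var/lib/logstash/dead_letter_queue"
    # remember which events have already been replayed
    commit_offsets => true
  }
}
output {
  elasticsearch {
    hosts => ["IP address:9200"]
    index => "test-index-%{+dd.MM.yyyy}"
  }
}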

Is it possible to achieve the expected behaviour with the dead_letter_queue, or does it only work for events that fail with mapping errors?

Thank you in advance for the help.

You want persistent queues. The dead letter queue only captures events that Elasticsearch actively rejects with a 400 or 404 response (e.g. mapping conflicts). When the connection is down, the elasticsearch output retries the request indefinitely instead, which applies back-pressure up the pipeline; that is why Logstash stops reading from Redis. A persistent queue buffers events on disk, so they survive an outage and are drained once Elasticsearch is reachable again.
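
Enabling it is a small change in logstash.yml, along these lines (a minimal sketch; the size limit is an example value):

# switch the in-memory queue to an on-disk, persistent one
queue.type: persisted
# example cap on disk usage; the default is 1024mb
queue.max_bytes: 4gb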

