Hi all,
I'm currently trying to introduce the dead letter queue (DLQ) plugin to the ELK stack that we use for collecting application logs.
We've been having some issues with the logging functionality breaking down and losing some requests, hence the idea of adding a DLQ to the pipeline so we can at least peek into what may be causing the trouble.
Here's the basic setup that I've got running locally. The whole stack is dockerized.
logstash.conf
input {
  beats {
    port => 5044
  }
}

filter {
  json {
    source => "message"
  }
  date {
    match => ["timestamp", "UNIX_MS"]
    target => "@timestamp"
  }
  ruby {
    code => "event.set('indexDay', event.get('[@timestamp]').time.localtime('+09:00').strftime('%Y%m%d'))"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    template => "/usr/share/logstash/templates/logstash.template.json"
    template_name => "logstash"
    template_overwrite => true
    index => "logstash-%{indexDay}"
    codec => json
  }
  stdout {
    codec => rubydebug
  }
}
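As a sanity check on the indexDay value that the ruby filter produces (it drives both index names), here's the same computation sketched in Python; the sample timestamp is just an arbitrary value I picked for illustration:

```python
from datetime import datetime, timezone, timedelta

def index_day(epoch_ms: int) -> str:
    # Same computation as the ruby filter: epoch millis -> YYYYMMDD in UTC+9.
    tz = timezone(timedelta(hours=9))
    return datetime.fromtimestamp(epoch_ms / 1000, tz).strftime("%Y%m%d")

# 2022-08-17 23:30 UTC already counts as the next day in UTC+9:
print(index_day(1660779000000))  # -> 20220818
```

So events near the end of the UTC day land in the next day's index, which is intended.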
logstash_dlq.conf
input {
  dead_letter_queue {
    path => "/usr/share/logstash/data/dead_letter_queue"
    commit_offsets => true
    pipeline_id => "main"
  }
}

output {
  elasticsearch {
    hosts => ["elasticsearch:9200"]
    index => "logstash-dlq-%{indexDay}"
    codec => json
  }
}
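In case it matters, the two configs are meant to run as two separate pipelines via pipelines.yml, along these lines (a sketch, assuming the default paths of the docker image):

```yaml
- pipeline.id: main
  path.config: "/usr/share/logstash/pipeline/logstash.conf"
- pipeline.id: dlq
  path.config: "/usr/share/logstash/pipeline/logstash_dlq.conf"
```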
logstash.yml
dead_letter_queue.enable: true
The whole thing seems to be working, but not quite how I'd expect. Every log processed by Logstash ends up in both indices, e.g. logstash-20220817 and logstash-dlq-20220817. Also, Logstash's DLQ directory, data/dead_letter_queue/main, contains only a single entry, 1.log, whose entire content is the character '1'.
I'd appreciate any tips that might help me set this up properly.
Cheers,
Adam