Hi everyone,
I intend to use the dead letter queue to capture log events that fail when Logstash sends them to MongoDB.
My config includes:
logstash.yml
http.host: "0.0.0.0"
pipeline.batch.delay: 10
pipeline.batch.size: 10
pipeline.ecs_compatibility: disabled
dead_letter_queue.enable: true
pipelines.yml
- pipeline.id: backup_elastics
  pipeline.ecs_compatibility: disabled
  path.config: "/usr/share/logstash/pipeline/dead-letter-queue.conf"
- pipeline.id: "read-file-log"
  pipeline.ecs_compatibility: disabled
  path.config: "/usr/share/logstash/pipeline/read-file.conf"
With this config I have 2 pipelines. The first one (read-file.conf) sends logs into MongoDB:
input {
  file {
    type => "dummylog"
    path => ["/home/data/*.log"]
  }
}
output {
  if [type] == "dummylog" {
    mongodb {
      collection => "logtash_mongodb"
      database => "dmp_log"
      uri => "mongodb://logtash:logtash@mongodb:27017/dmp_log"
      codec => "json"
    }
  }
  stdout {
    codec => rubydebug
  }
}
The second one (dead-letter-queue.conf) should write any failed log events into Elasticsearch:
input {
  dead_letter_queue {
    id => "backup_elastics"
    pipeline_id => "read-file-log"
    commit_offsets => true
    path => "/usr/share/logstash/data/dead_letter_queue/"
  }
}
output {
  elasticsearch {
    hosts => [ "elasticsearch:9200" ]
    index => "logs-%{+YYYY.MM.dd}"
    manage_template => false
  }
  stdout {
    codec => rubydebug
  }
}
But when MongoDB has a problem, the failed events are not sent into the dead letter queue; Logstash just keeps retrying:
logtash-mongo-logstash-1 | [WARN ] 2022-10-07 04:59:56.849 [[read-file-log]>worker1] mongodb - Failed to send event to MongoDB, retrying in 3 seconds {:event=>#<LogStash::Event:0x31ec7696>, :exception=>#<Mongo::Error::NoServerAvailable: No server is available matching preference: #<Mongo::ServerSelector::Primary:0x7a355ad8 @tag_sets=[], @server_selection_timeout=30, @options={:database=>"dmp_log", :user=>"logtash", :password=>"logtash"}>>}
I'm using Logstash 7.17.6 with logstash-output-mongodb version 3.1.5.
So is any of the config above wrong? Or how can I catch the errors from the first pipeline (the MongoDB one) so I can process the failed events?
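Also, is the node stats API the right way to confirm whether anything ever reaches the DLQ? I was thinking of something like the call below (just a sketch; I'm assuming the default monitoring port 9600, which should be reachable since http.host is set to "0.0.0.0"):

curl -s 'http://localhost:9600/_node/stats/pipelines?pretty'

If I understand correctly, each pipeline in the response should include a dead_letter_queue section with queue_size_in_bytes once the DLQ is enabled.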
Thank you.