Environment:
Logstash 6.2.4 in Docker, multi-pipeline mode
Data flow: Redis input to Elasticsearch output
I have enabled the dead letter queue for my pipelines and can see mapping-failure events landing in the dead-letter-queue folder under the appropriate pipeline IDs. I'm trying to create a pipeline that reads the dead-letter folder and pushes the events into another Elasticsearch index that has no mapping enforced. The data does reach the index, but I'm not able to get the @metadata that holds the actual error; I only get the original message and so on. I tried using the rubydebug codec on the elasticsearch output, but I'm not getting the expected result. Is this possible? Please advise.
Goal: Read the DLQ data and push it into an Elasticsearch index that has no mapping defined, together with the @metadata that captures the error from the original pipeline.
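For reference, the DLQ is enabled roughly like this in logstash.yml (a minimal sketch; the path is an assumption chosen to match the input config further down):

  # logstash.yml -- DLQ settings assumed for this setup
  dead_letter_queue.enable: true
  path.dead_letter_queue: "/var/log/logstash/deadletter"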
To replicate:
Elasticsearch mapping template (message is mapped as integer, so a non-numeric message fails to index):
PUT _template/test.dlq
{
  "index_patterns": ["test*"],
  "mappings": {
    "doc": {
      "properties": {
        "message": {
          "type": "integer"
        }
      }
    }
  }
}
Redis entry: LPUSH "test.1" "invalid"
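The source pipeline looks roughly like this (a sketch; the Redis host and the index name are assumptions):

  # pipeline test.1 -- sketch of the source pipeline
  input {
    redis {
      host => "REDIS"        # assumption: Redis host name
      data_type => "list"
      key => "test.1"
    }
  }
  output {
    elasticsearch {
      hosts => ["ES:9200"]
      index => "test.data"   # assumption: matches the test* template, so the
                             # non-integer "invalid" fails mapping and goes to the DLQ
    }
  }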
DLQ processing pipeline (the data pushed in is invalid per the mapping, so it lands in the DLQ):
input {
  dead_letter_queue {
    path => "/var/log/logstash/deadletter"
    commit_offsets => true
    pipeline_id => "test.1"
    ?? codec => rubydebug { metadata => true }
  }
}
output {
  elasticsearch {
    hosts => ["ES:9200"]
    manage_template => false
    index => "test.dlq.data"
    ?? codec => rubydebug { metadata => true }
  }
}
?? -- tried the codec in both positions; neither gave the expected result.
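As far as I know rubydebug is an output codec meant for debugging, so one way to at least see the metadata would be a temporary stdout output alongside the elasticsearch one (a sketch, not part of the final pipeline):

  # temporary debugging output -- prints each event, including @metadata,
  # to the Logstash log so the DLQ error details become visible
  output {
    stdout { codec => rubydebug { metadata => true } }
  }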
Another issue: although I enabled commit_offsets, every time I restart the Docker instance it reprocesses all the historical DLQ entries. Should I explicitly mount the sincedb path?
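If it helps, the dead_letter_queue input keeps its read offsets in a sincedb file; a sketch of pinning it to a path on a mounted volume so the offsets survive container restarts (the exact path is an assumption):

  input {
    dead_letter_queue {
      path => "/var/log/logstash/deadletter"
      commit_offsets => true
      pipeline_id => "test.1"
      # assumption: a path on a Docker-mounted volume, so offsets persist
      sincedb_path => "/var/log/logstash/dlq-sincedb"
    }
  }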
Thanks for any advice.
Tried adding the filter below to the dead-letter processing pipeline, but the target field ended up with the value null.

filter {
  mutate {
    rename => {
      "@metadata" => "failurereason"
    }
  }
}

NOTE: used codec => rubydebug { metadata => true } on the dead_letter_queue input plugin.
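For what it's worth, since @metadata is not a regular event field, renaming it wholesale seems to yield null; a sketch of the alternative I would expect to work is copying its subfields into regular fields with add_field (the source field names under [@metadata][dead_letter_queue] are the ones the plugin documents; the target names failurereason and failed_plugin are my own):

  filter {
    mutate {
      # copy the DLQ metadata into regular fields so the
      # elasticsearch output will actually index them
      add_field => {
        "failurereason" => "%{[@metadata][dead_letter_queue][reason]}"
        "failed_plugin" => "%{[@metadata][dead_letter_queue][plugin_type]}"
      }
    }
  }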