Hi,
I am using Logstash 6.2.4 with the following logstash.yml settings:
pipeline.batch.size: 600
pipeline.workers: 1
dead_letter_queue.enable: true
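(For reference, I have not overridden the optional DLQ settings, so the documented defaults apply; written out they would look roughly like this, where the path value is a placeholder for my actual path.data location:)

path.dead_letter_queue: "/path/to/data/dead_letter_queue"
dead_letter_queue.max_bytes: 1024mb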
The pipeline configuration file used to run Logstash is:
input {
  file {
    path => "/home/administrator/Downloads/postgresql.log.2018-10-17-06"
    start_position => "beginning"
  }
}

filter {
  grok {
    match => { "message" => "%{DATESTAMP:timestamp} %{TZ}:%{IP:uip}\(%{NUMBER:num}\):%{WORD:dbuser}%{GREEDYDATA:msg}" }
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    id => "es-1"
    hosts => ["localhost:9200"]
    timeout => 60
    index => "dlq"
    version => "%{[@metadata][version]}"
    version_type => "external_gte"
  }
}
The input is a plain PostgreSQL log file whose lines are parsed with the grok filter above.
Since nothing populates [@metadata][version], the sprintf reference in the output never resolves, so the version is always a string rather than an integer, and Elasticsearch rejects the request with a 400 Bad Request. (This is an intentional test case for Logstash's DLQ behaviour.)
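(For contrast, in a non-test setup something would populate [@metadata][version] with a numeric value before the output runs; a minimal sketch using the grok-captured num field, which is my own assumption about where a version would come from:)

filter {
  ruby {
    # copy the grok-captured "num" field into event metadata as an integer,
    # so the sprintf reference in the elasticsearch output resolves to a number
    code => "event.set('[@metadata][version]', event.get('num').to_i)"
  }
}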
On this error code, Logstash should retry a finite number of times and then push the request payload to the dead letter queue file (as per the documentation: https://www.elastic.co/guide/en/logstash/current/dead-letter-queues.html). Instead it gets stuck in an infinite retry loop with the message:
[2018-10-23T12:11:42,475][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff {:code=>400, :url=>"localhost:9200/_bulk"}
The data/dead_letter_queue/main directory contains only:
1.log (contains the single character "1", so no events appear to have been written)
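(For reference, once events do reach the DLQ, the linked documentation describes replaying them with the dead_letter_queue input plugin; a minimal sketch, with the path assumed to point at my data directory:)

input {
  dead_letter_queue {
    # path is a placeholder for <path.data>/dead_letter_queue
    path => "/path/to/data/dead_letter_queue"
    pipeline_id => "main"
    commit_offsets => true
  }
}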
Please advise whether any configuration is missing that leads to this situation.