Encountered a retryable error in Logstash 5.6.3

I'm using Logstash 5.6.3 with a Filebeat input and an Elasticsearch output, handling a huge volume of logs: Filebeat reads about 100 GB of log files. In the beginning everything works fine, but after about 30 minutes Logstash starts logging these errors:

[2017-10-26T18:22:44,444][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://127.0.0.1:9200/_bulk", :body=>"{\"error\":{\"root_cause\":[{\"type\":\"parse_exception\",\"reason\":\"request body is required\"}],\"type\":\"parse_exception\",\"reason\":\"request body is required\"},\"status\":400}"}
[2017-10-26T18:23:17,262][ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://127.0.0.1:9200/_bulk", :body=>"{\"error\":{\"root_cause\":[{\"type\":\"parse_exception\",\"reason\":\"request body is required\"}],\"type\":\"parse_exception\",\"reason\":\"request body is required\"},\"status\":400}"}

This log message keeps repeating. While this is happening, Elasticsearch doesn't log any errors, and data is still being ingested into Elasticsearch.
I have the DLQ enabled, but there are no events in the dead_letter_queue directory: it contains only a single file, 1.log, and that file is empty.
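For reference, this is how I would read events back out of the DLQ if anything did land there, using the dead_letter_queue input plugin. This is just a sketch; the path and pipeline id below are assumptions based on the default `path.data` layout and pipeline name, not something taken from my setup:

```
input {
  dead_letter_queue {
    # assumed default: <path.data>/dead_letter_queue
    path => "/var/lib/logstash/data/dead_letter_queue"
    # assumed pipeline name; adjust to your pipeline id
    pipeline_id => "main"
    # remember position so events are not re-read on restart
    commit_offsets => true
  }
}
output {
  # print each dead-lettered event, including its failure metadata
  stdout { codec => rubydebug { metadata => true } }
}
```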
My logstash.yml settings are:

config.reload.automatic: true
config.reload.interval: 3
queue.type: persisted
queue.max_bytes: 50gb
queue.max_events: 0
dead_letter_queue.enable: true
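For context on what the error body means: Elasticsearch returns this exact parse_exception when the _bulk endpoint receives a request with no body, which suggests Logstash is intermittently sending empty bulk requests. A quick way to see the same response shape yourself (assuming Elasticsearch is listening on 127.0.0.1:9200, as in the error above):

```
# POST to the bulk API with an empty body; Elasticsearch responds with
# a 400 and reason "request body is required", matching the log output
curl -s -XPOST 'http://127.0.0.1:9200/_bulk' -H 'Content-Type: application/json'
```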

Did you solve this?
I am hitting a similar issue (it looks the same): the log is full of "Encountered a retryable error" messages and Logstash has stopped listening.


This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.