Bulk requests are retried indefinitely

I have an issue where a bulk request fails from the Elasticsearch output plugin. After reading the docs, I saw that these requests are retried indefinitely. However, this blocks the whole pipeline, so no new events are consumed.

I have a dead letter queue set up and enabled, so I would like these failed bulk requests (which return a 400) to be dropped into the DLQ while processing continues with other events, rather than blocking the whole pipeline.
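For context, this is roughly my setup (a minimal sketch, with the path assumed to be the default data directory): the DLQ is enabled in logstash.yml with `dead_letter_queue.enable: true`, and a second pipeline re-reads dropped events via the `dead_letter_queue` input plugin:

```
# Sketch of a pipeline that reprocesses events from the dead letter queue.
# The path below assumes Logstash's default data directory; adjust to your install.
input {
  dead_letter_queue {
    path           => "/usr/share/logstash/data/dead_letter_queue"
    pipeline_id    => "main"   # the pipeline whose DLQ we read
    commit_offsets => true     # remember position across restarts
  }
}
output {
  # Print DLQ entries, including the failure metadata, for inspection.
  stdout { codec => rubydebug { metadata => true } }
}
```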

Is there any way to do this? Any way to specify the maximum number of retries?

The actual error is:

[ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff  {:code=>400, :url=>"http://localhost:9200/_bulk", :body=>"{\"error\":{\"root_cause\":[{\"type\":\"action_request_validation_exception\",\"reason\":\"Validation Failed: 1: no requests added;\"}],\"type\":\"action_request_validation_exception\",\"reason\":\"Validation Failed: 1: no requests added;\"},\"status\":400}"}

I do not think so. The docs state: "HTTP requests to the bulk API are expected to return a 200 response code. All other response codes are retried indefinitely." As far as I know, the DLQ only catches per-document 400/404 errors inside an otherwise successful bulk response; in your case the whole request is rejected with a 400 ("no requests added" suggests an empty bulk body), so it is retried instead.

But then the processing of other events is blocked until this one is dropped manually by restarting Logstash. Surely there's a way to prevent this loop.

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.