Logstash throttling/caching


I'm using Logstash with a pipe input and an elasticsearch output. My ES cluster has had availability problems recently: there were brief periods when the cluster was unreachable, and I want to determine whether any events were lost.
In the Logstash logs I can see entries like the following:

```
[2017-07-06T02:15:08,696][INFO ][logstash.outputs.elasticsearch] retrying failed action with response code: 429 ({"type"=>"es_rejected_execution_exception", "reason"=>"rejected execution of org.elasticsearch.transport.TransportService$6@151e135d on EsThreadPoolExecutor[bulk, queue capacity = 50, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@5e1ca942[Running, pool size = 6, active threads = 6, queued tasks = 50, completed tasks = 10293316]]"})
[2017-07-06T02:15:08,696][ERROR][logstash.outputs.elasticsearch] Retrying individual actions
[2017-07-06T02:15:08,696][ERROR][logstash.outputs.elasticsearch] Action
```

There is no subsequent message like "retried action was dropped, too many retries". However, I'm not sure what this means: does the retry eventually succeed at some point, or is the data lost?
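For context: the 429 in the log above means the bulk thread pool's queue on the ES node was full (note "queue capacity = 50, queued tasks = 50" in the message), so the node rejected the request and the elasticsearch output backs off and retries it. A hedged sketch of the output options that bound that backoff (option names are from the logstash-output-elasticsearch plugin; the `hosts` value is a placeholder):

```conf
output {
  elasticsearch {
    hosts => ["localhost:9200"]   # placeholder address
    # 429 responses are retried with exponential backoff;
    # these settings bound the backoff interval, in seconds
    retry_initial_interval => 2
    retry_max_interval     => 64
  }
}
```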


I think you can assume that the events eventually reached ES. The exception would be if you restarted Logstash while events were still in flight and had not enabled the (relatively new) persistent queue feature.
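If restarts during an outage are a concern, the persistent queue mentioned above can be enabled in `logstash.yml` so that in-flight events are buffered on disk and survive a restart. A minimal sketch, with illustrative values (`path.queue` must point at a directory writable by the Logstash user):

```yaml
# logstash.yml
queue.type: persisted                  # default is "memory"
queue.max_bytes: 1gb                   # cap on disk space used by the queue
path.queue: /var/lib/logstash/queue    # illustrative path
```

With the default in-memory queue, anything buffered inside Logstash at shutdown is lost; with `queue.type: persisted`, events are acknowledged to the input only after being written to disk.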

This topic was automatically closed 28 days after the last reply. New replies are no longer allowed.