We are using Logstash 6.2.4. We have two outputs configured: all data goes to Elasticsearch, and some data also goes to an HTTP endpoint that is not always running while we are testing. When the endpoint is not running, we receive a connect timed out error (fine), but Logstash keeps trying to send the data over and over, even though we are using the default automatic_retries of 1. How can we configure Logstash to only retry a limited number of times?
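For reference, our output section looks roughly like this (conditional routing omitted; the host and URL are simplified placeholders):

    output {
      elasticsearch {
        hosts => ["localhost:9200"]              # placeholder; all events go here
      }
      http {
        url => "http://localhost:8080/ingest"    # placeholder for the endpoint that may be down
        http_method => "post"
        format => "json"
        # automatic_retries is left at its default of 1, yet failed sends repeat indefinitely
      }
    }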
Also, after retrying the HTTP output for a while, Logstash seems to give up on either processing new input or on sending it to Elasticsearch (we haven't traced down where the problem is; we just know that our data no longer gets into Elasticsearch).
Well, yes. If the pipeline is full then backpressure will prevent it from ingesting additional events.
If I am reading the code correctly, there are two levels of retries. automatic_retries configures the inner one; retry_failed can be used to disable the outer one.
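In config terms, something like this (an untested sketch; the URL is a placeholder):

    output {
      http {
        url => "http://localhost:8080/ingest"
        retry_failed => false     # outer retry: the plugin's own retry-until-success loop
        automatic_retries => 1    # inner retry: attempts made by the underlying HTTP client
      }
    }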
I'm not sure what you mean by inner and outer retries. When we set retry_failed to false, it won't retry at all. So why does it retry continuously when automatic_retries is set to 1? (We even tested setting it to different values, including 0, 1, and 2, and it still retried continuously.)
I am not convinced that that is true. Logstash will not log that it is retrying, but I don't think the inner retry in the Manticore client is disabled.
Thank you for your quick responses. I will do some more testing to verify, but it sounds like what we want is retry_failed set to false and automatic_retries set to some reasonable number.
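If so, our output would end up looking something like this (the retry count is just an example value; host and URL are placeholders):

    output {
      elasticsearch {
        hosts => ["localhost:9200"]
      }
      http {
        url => "http://localhost:8080/ingest"
        http_method => "post"
        format => "json"
        retry_failed => false     # stop the plugin-level retry loop
        automatic_retries => 3    # allow a few client-level retries before giving up
      }
    }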