Not sure that is true. Most timers in Logstash trigger every 5 seconds, so two changes within 9.99 seconds may only trigger one reload. I may be wrong, but that is what my starting assumption would be if I were going to test it (which I am not).
You are right @Badger, the auto-reload interval is configurable, so multiple changes within that interval will probably trigger just one reload, provided the resulting file differs from the one Logstash was already running.
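For reference, the reload behavior discussed above is controlled by two settings in `logstash.yml`. A minimal sketch (the values shown are illustrative; `3s` happens to be the default interval):

```yaml
# logstash.yml -- enable automatic config reloading
config.reload.automatic: true
# how often Logstash checks the config files for changes;
# multiple edits within one interval may coalesce into a single reload
config.reload.interval: 3s
```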
Logstash will reload the configuration and resume sending data to Elasticsearch. If there is something wrong with the configuration, it will fail to load the new configuration until it is fixed.
This has no relation to your original question; the last_failure_timestamp refers to a failure when trying to reload the pipeline, not when trying to send data to Elasticsearch.
HTTP requests to the bulk API are expected to return a 200 response code. All other response codes are retried indefinitely.
And regarding errors, document-level errors are handled as follows:
400 and 404 errors are sent to the dead letter queue (DLQ), if enabled. If a DLQ is not enabled, a log message will be emitted and the event will be dropped. See DLQ Policy for more info.
409 errors (conflict) are logged as a warning and dropped.
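If you want those 400/404 events preserved instead of dropped, the DLQ is enabled in `logstash.yml`. A minimal sketch, assuming the default data path layout (the explicit path line is optional and shown only for illustration):

```yaml
# logstash.yml -- enable the dead letter queue so mapping/404 errors
# from the elasticsearch output are written to disk instead of dropped
dead_letter_queue.enable: true
# optional: where DLQ segments are stored
# (defaults to a dead_letter_queue directory under path.data)
path.dead_letter_queue: "/var/lib/logstash/dlq"
```

The stored events can then be reprocessed later with the `dead_letter_queue` input plugin in a separate pipeline.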