Recently, the logs of our Logstash instance have been showing this error:
[ERROR][logstash.outputs.elasticsearch] Encountered a retryable error. Will Retry with exponential backoff
{:code=>504, :url=>"https://<hostname_elastic-cloud_cluster:9243/_bulk"}
Does anybody know the reason for this error? When it happens, is it possible that some events never arrive in Elasticsearch?
Code 504 is "Gateway Timeout", but the cluster is always up, without interruption.
Something in the networking layer between Logstash and your Elasticsearch cluster is timing out while establishing a connection to Elasticsearch. As noted in the log message, Logstash will retry the bulk request with (capped) exponential backoff, which means that it will wait longer between retries if it takes multiple attempts.
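To illustrate what "capped exponential backoff" means in practice, here is a minimal sketch (the base delay and cap values are illustrative, not the plugin's actual internals):

```ruby
# Sketch of capped exponential backoff: the delay doubles after each
# failed attempt until it reaches a cap, then stays at the cap.
def backoff_delays(attempts, base: 2, cap: 64)
  (0...attempts).map { |n| [base * 2**n, cap].min }
end

backoff_delays(6)  # => [2, 4, 8, 16, 32, 64]
```

The cap matters because, without it, a long outage would push the wait time toward hours; with it, Logstash keeps probing the cluster at a bounded interval.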
The Elasticsearch output plugin for Logstash emits this ERROR message each time it encounters the error and continues to retry until it succeeds (or, if a dead-letter queue [DLQ] is configured, until a configured limit is reached, at which point the retry is aborted and the events are written to the DLQ).
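For reference, the DLQ is switched on in `logstash.yml`; a minimal sketch assuming Logstash's documented DLQ settings (the size value is illustrative):

```yaml
# logstash.yml — enable the dead-letter queue (illustrative values)
dead_letter_queue.enable: true
# Optional: cap the DLQ's on-disk size before Logstash stops writing to it
dead_letter_queue.max_bytes: 1024mb
```

Events that land in the DLQ can later be replayed with the `dead_letter_queue` input plugin, so they are not lost, just set aside.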
Are you seeing these log messages repeatedly for minutes on end, indicating that we are stuck in a retry loop? Does processing appear to have stalled out?