I found that I can specify the number of times to retry a request if the connection is lost (example below uses the Python client):
from elasticsearch import Elasticsearch
es = Elasticsearch(url, basic_auth=(user, password), max_retries=10)
However, how can I specify a wait interval before such retries?
For example, if an Elasticsearch request fails, I want to retry 10 times with a 1-second interval between requests (ideally, I would like to wait 1 second * 2**retry_number between retries).
I found that the Elasticsearch bulk helper has such an option, known as initial_backoff.
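For reference, here is a sketch of how that option is used with `elasticsearch.helpers.bulk`, which forwards retry settings to `streaming_bulk`: on a retryable failure it waits `initial_backoff * 2**retry_number` seconds (capped at `max_backoff`) before retrying. The index name and documents are placeholders; the client call is shown commented out since it needs a live cluster:

```python
# Hypothetical setup -- uncomment and adapt to your cluster:
# from elasticsearch import Elasticsearch, helpers
# es = Elasticsearch("http://localhost:9200")
# actions = ({"_index": "my-index", "_source": doc} for doc in docs)
# helpers.bulk(es, actions, max_retries=10, initial_backoff=1, max_backoff=600)

# The wait schedule produced by the settings above
# (initial_backoff doubles each retry, capped at max_backoff):
waits = [min(1 * 2**n, 600) for n in range(10)]
print(waits)  # [1, 2, 4, 8, 16, 32, 64, 128, 256, 512]
```

Note that this backoff only applies to the bulk helpers, not to ordinary client requests.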
I don't know if it's possible but I'm wondering why you'd like it to be "slow"...
In a multi-node cluster, which is what you should have in production, you should never ever hit such a problem, as at least one of the nodes should remain queryable.
So I'm wondering why you would like to have such a behavior?
@dadoonet, I would say that due to the current network setup, there could be short windows where all my nodes are unavailable.
In that case the data fails to be delivered to Elasticsearch, and a retry would help me here.