How to specify "wait N seconds before retry" in the Elasticsearch Python client?

Hello guys!

I found that I can specify the number of attempts to retry a request if the connection is lost (example below for the Python client):

from elasticsearch import Elasticsearch
es = Elasticsearch(url, basic_auth=(user, password), max_retries=10)  # note: "pass" is a reserved word in Python

However, how can I specify a wait interval before such retries?

For example, if an Elasticsearch request fails, I want to retry 10 times with a 1-second interval between attempts (ideally, I would like to wait 1 second * 2**retry_number between retries).
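Something like this hand-rolled wrapper is what I have in mind (just a sketch; `call_with_retries` and its parameters are illustrative names of mine, not part of the client):

```python
import time


def call_with_retries(func, max_retries=10, base_delay=1.0):
    """Call func(), retrying on exception with exponential backoff.

    Before retry number n (0-based) it sleeps base_delay * 2**n seconds.
    Purely illustrative; the Elasticsearch client offers no such knob.
    """
    for attempt in range(max_retries + 1):
        try:
            return func()
        except Exception:
            if attempt == max_retries:
                raise  # out of retries, surface the last error
            time.sleep(base_delay * 2 ** attempt)


# Usage sketch (assuming an `es` client as above):
# result = call_with_retries(lambda: es.search(index="logs", query={"match_all": {}}))
```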

I found that the Elasticsearch bulk helpers have such an option, known as initial_backoff:

Elasticsearch Helpers

However, there's nothing similar for the Elasticsearch client itself.
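For reference, my reading of those helper parameters is that initial_backoff seeds a wait that doubles on each retry, capped at max_backoff, so the wait schedule would look roughly like this (the exact formula is my interpretation of the docs, not a quote of the library code):

```python
# Illustrative reconstruction of the bulk helpers' backoff schedule.
initial_backoff = 2   # seconds before the first retry (helpers' default)
max_backoff = 600     # cap on any single wait (helpers' default)
max_retries = 10

# Wait before retry `attempt` (1-based): doubling from initial_backoff,
# never exceeding max_backoff.
delays = [min(max_backoff, initial_backoff * 2 ** (attempt - 1))
          for attempt in range(1, max_retries + 1)]
print(delays)  # [2, 4, 8, 16, 32, 64, 128, 256, 512, 600]
```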

How can I achieve this?

Thank you!

I don't know if it's possible but I'm wondering why you'd like it to be "slow"...
In a multi-node cluster, which is what you should have in production, you should never hit such a problem, as at least one of the nodes should remain queryable.

So I'm wondering why you would like to have such a behavior?

@dadoonet, indeed, however "should" does not mean "must". A retry mechanism in this situation helps turn that "should" into a "must".

To add here

@dadoonet, I would say that due to my current network setup, there can be short windows when all of my nodes are unavailable at once.

In that case, data fails to be delivered to Elasticsearch, and a retry with a wait would help.

Hence my question.

Thank you!

Hello, this is not possible using the Python client, but it would be a good enhancement to add.

Hey @Quentin_Pradet,

Thank you for the confirmation!

Is there an existing bug/feature issue I can subscribe to, or should I open a new feature request myself?

Thanks!

Hello, and sorry for the delay. You can follow this issue: `retry_on_status` setting does not work as expected with requests that should not be retried immediately · Issue #2485 · elastic/elasticsearch-py · GitHub.