BulkProcessor Indexing Timeout

Each bulk request in the bulk processor waits on a semaphore whose maximum number of permits is controlled by the "number of concurrent requests" parameter in the bulk processor's settings. So let's say this value is configured to be 16: after the bulk processor has sent 16 requests, it will wait for the response to at least one of those 16 requests to come back before sending the 17th request to the cluster.
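The throttling described above can be sketched with a plain java.util.concurrent.Semaphore. This is a minimal, self-contained model of the mechanism, not the actual BulkProcessor code; the class name, the fake 5 ms "response time", and the thread pool standing in for the cluster are all my own illustration:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicInteger;

public class BulkThrottleSketch {

    /**
     * Sends `total` fake "bulk requests" with at most `concurrency` in flight,
     * and returns the maximum number observed in flight at once.
     */
    public static int runBatch(int total, int concurrency) throws InterruptedException {
        Semaphore permits = new Semaphore(concurrency);   // the "concurrent requests" setting
        AtomicInteger inFlight = new AtomicInteger();
        AtomicInteger maxInFlight = new AtomicInteger();
        ExecutorService cluster = Executors.newCachedThreadPool();  // stands in for ES

        for (int i = 0; i < total; i++) {
            permits.acquire();                            // blocks once `concurrency` requests await responses
            int now = inFlight.incrementAndGet();
            maxInFlight.accumulateAndGet(now, Math::max);
            cluster.submit(() -> {
                try {
                    Thread.sleep(5);                      // simulated cluster response time
                } catch (InterruptedException ignored) {
                } finally {
                    inFlight.decrementAndGet();
                    permits.release();                    // a response frees one slot for the next request
                }
            });
        }
        cluster.shutdown();
        cluster.awaitTermination(30, TimeUnit.SECONDS);
        return maxInFlight.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("max in flight: " + runBatch(100, 16));
    }
}
```

With 16 permits, the 17th acquire() blocks until some earlier request's "response" calls release(), which is exactly the behavior in question: if no response ever arrives, nothing releases a permit and the sender blocks.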

Now suppose the ES cluster is responding very slowly and takes a few minutes to answer one bulk request, or in the worst case becomes unavailable after the bulk request has reached the cluster. Will the bulk processor wait indefinitely, hoping to get a response from the cluster?

In short, I want to know whether there is any timeout configured for such scenarios. If there is, can we modify it?


Yes, it will wait indefinitely: bulk requests never time out.

You could try patching the org.elasticsearch.action.bulk.BulkAction class, which is missing a timeout setting in its transport options.

Replace

    @Override
    public TransportRequestOptions transportOptions(Settings settings) {
        return TransportRequestOptions.builder()
                .withType(TransportRequestOptions.Type.BULK)
                .withCompress(settings.getAsBoolean("action.bulk.compress", true))
                .build();
    }

with

    @Override
    public TransportRequestOptions transportOptions(Settings settings) {
        return TransportRequestOptions.builder()
                .withType(TransportRequestOptions.Type.BULK)
                .withTimeout(settings.getAsTime("action.bulk.timeout", TimeValue.timeValueSeconds(60)))
                .withCompress(settings.getAsBoolean("action.bulk.compress", true))
                .build();
    }

and then you have a timeout on your bulk requests: 60 seconds by default, adjustable via the action.bulk.timeout configuration parameter. (There may be more work left to do to let ES know about the new configuration parameter.)
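To see what such a timeout buys, here is a self-contained sketch using only java.util.concurrent (not Elasticsearch classes; the class and method names are my own). It models an unresponsive cluster as a semaphore with no permits: a plain acquire() would block forever, while a bounded tryAcquire() returns control to the caller after the timeout:

```java
import java.util.concurrent.Semaphore;
import java.util.concurrent.TimeUnit;

public class BoundedWaitSketch {

    /**
     * Models waiting for a send slot that is never released (an unresponsive
     * cluster). Returns true if the slot was obtained within the timeout,
     * false if the wait was abandoned.
     */
    public static boolean trySend(long timeoutMillis) throws InterruptedException {
        Semaphore permits = new Semaphore(0);   // no responses ever arrive, so no permits are released
        // permits.acquire() here would block forever; a bounded wait gives up instead:
        return permits.tryAcquire(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("acquired within 100 ms: " + trySend(100));  // prints false
    }
}
```

This is the behavior the patched transportOptions() aims for: instead of the sender hanging on a response that never comes, the request fails after the configured timeout and the application can react.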
