We create the BulkProcessor object as shown below. While some upserts are processed successfully, subsequent upserts never complete. This occurs in a high-scale environment where the volume of upsert requests is high. We add requests using bulkProcessor.add(request), and when we take a heap dump we can see the requests still present in memory.
Are there any known issues with this approach, or have similar problems been reported previously?
For reference, we are using Elasticsearch 6.8.12.
bulkProcessor = BulkProcessor.builder(bulkConsumer, getBulkProcessListener())
        .setBulkActions(2000)
        .setBulkSize(new ByteSizeValue(100, ByteSizeUnit.MB))
        .setFlushInterval(TimeValue.timeValueSeconds(5)) // flush every 5 seconds
        .setConcurrentRequests(2)
        .build();
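For completeness, getBulkProcessListener() is not shown above. Our real listener may differ, but a minimal sketch along the following lines (the logging calls are illustrative assumptions) at least surfaces per-item failures in afterBulk, which can reveal whether requests are failing silently rather than being dropped:

```java
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;

// Hypothetical sketch of the listener, for illustration only.
private BulkProcessor.Listener getBulkProcessListener() {
    return new BulkProcessor.Listener() {
        @Override
        public void beforeBulk(long executionId, BulkRequest request) {
            // Log the batch size before it is sent.
            System.out.printf("Bulk %d: sending %d actions%n",
                    executionId, request.numberOfActions());
        }

        @Override
        public void afterBulk(long executionId, BulkRequest request,
                              BulkResponse response) {
            // A "successful" bulk can still contain per-item failures.
            if (response.hasFailures()) {
                System.err.printf("Bulk %d had failures: %s%n",
                        executionId, response.buildFailureMessage());
            }
        }

        @Override
        public void afterBulk(long executionId, BulkRequest request,
                              Throwable failure) {
            // The whole bulk failed (e.g. connection issue); nothing was indexed.
            System.err.printf("Bulk %d failed entirely: %s%n",
                    executionId, failure);
        }
    };
}
```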