Are there any recommendations on how to cleanly flush all items that are in the Rest High Level BulkProcessor (configured with one concurrent request) as the JVM exits? We have a Spring Boot application and have tried flushing the BulkProcessor manually as the bean is destroyed, and also calling awaitClose, but in both cases the BulkProcessor does not seem to flush the items out to ES, and the bulk items are lost.
This is using the Rest High Level Client 7.5.1 and Elasticsearch 7.4.0.
However, please take a look at the javadocs: if concurrent requests are enabled, you need to specify the amount of time you want awaitClose() to wait. Also, what is the return value of awaitClose() when you call it?
Maybe you can also mention how the bulk processor is configured.
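For example, something along these lines should block until in-flight bulk requests complete (a minimal sketch; it assumes you keep a reference to the processor, and the 30-second timeout is an arbitrary choice, not a recommendation):

import java.util.concurrent.TimeUnit;
import org.elasticsearch.action.bulk.BulkProcessor;

// Sketch: with concurrent requests enabled, awaitClose must be given a timeout.
// The boolean result tells you whether all pending bulk requests completed in time.
boolean shutdown(BulkProcessor bulkProcessor) throws InterruptedException {
    boolean drained = bulkProcessor.awaitClose(30L, TimeUnit.SECONDS);
    if (!drained) {
        // Timed out: some bulk items may not have reached Elasticsearch.
    }
    return drained;
}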
Thanks for the reply. We currently have the bulkprocessor configured as follows:
import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.common.unit.ByteSizeUnit;
import org.elasticsearch.common.unit.ByteSizeValue;
import org.elasticsearch.common.unit.TimeValue;

BulkProcessor bulkProcessor = BulkProcessor.builder(consumer, listener)
        .setBulkActions(1000)
        .setBulkSize(new ByteSizeValue(5, ByteSizeUnit.MB))
        .setFlushInterval(TimeValue.timeValueSeconds(5))
        .setConcurrentRequests(1)
        .setBackoffPolicy(BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 3))
        .build();
When we explicitly call awaitClose in the dispose method it returns immediately; we'll check the return value and report back. One thing we did notice is that the Throwable passed to afterBulk contains an InterruptedException (which I am guessing is expected as the JVM shuts down).
Please check the return value of awaitClose: is it true or false? By default it does not wait, as mentioned in the javadocs; that was the reason why I explicitly asked whether you check it.
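For reference, here is a rough sketch of how that check could look in a Spring Boot bean's destroy callback (illustrative only; the class name, the timeout, and the use of SLF4J logging are assumptions, not part of your setup):

import java.util.concurrent.TimeUnit;
import javax.annotation.PreDestroy;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Hypothetical shutdown hook: logs awaitClose's return value so you can see
// whether the processor actually drained before the JVM exits.
public class BulkProcessorLifecycle {

    private static final Logger log = LoggerFactory.getLogger(BulkProcessorLifecycle.class);
    private final BulkProcessor bulkProcessor;

    public BulkProcessorLifecycle(BulkProcessor bulkProcessor) {
        this.bulkProcessor = bulkProcessor;
    }

    @PreDestroy
    public void dispose() {
        try {
            boolean drained = bulkProcessor.awaitClose(30L, TimeUnit.SECONDS);
            log.info("BulkProcessor awaitClose returned {}", drained);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
            log.warn("Interrupted while waiting for BulkProcessor to close", e);
        }
    }
}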