Does this indicate that Elasticsearch is unable to cope with the ingestion rate and is rejecting some writes? Or is the bulk API of the high-level REST client taking care of writing the documents using retry logic? Can we increase the size of the write thread pool?
It's impossible to say from these stats alone. Sometimes when a task is rejected from the write thread pool the corresponding write operation still succeeds, so you will need to check the responses sent back to the client.
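As a sketch of what "check the responses" means with the high-level REST client: a bulk request can partially succeed, so you have to inspect each item in the `BulkResponse` rather than only the top-level status. The `client` and `bulkRequest` variables here are assumed to be set up elsewhere.

```java
import org.elasticsearch.action.bulk.BulkItemResponse;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;

public class BulkFailureCheck {
    // Sends one bulk request and logs any per-document failures,
    // e.g. rejections from a full write thread pool queue.
    static void indexAndCheck(RestHighLevelClient client, BulkRequest bulkRequest)
            throws java.io.IOException {
        BulkResponse response = client.bulk(bulkRequest, RequestOptions.DEFAULT);
        if (response.hasFailures()) {
            for (BulkItemResponse item : response.getItems()) {
                if (item.isFailed()) {
                    // record the failing document id and the reason so the
                    // application can decide whether to retry or drop it
                    System.err.println(item.getId() + ": " + item.getFailureMessage());
                }
            }
        }
    }
}
```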
I don't think there's any built-in retry logic if you're using the Bulk API directly, although there is if you use the BulkProcessor. Either way, it's best to record failures on the client (e.g. with logging) and handle them appropriately for your application.
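A minimal sketch of the BulkProcessor route, assuming an already-constructed `client`: the `setBackoffPolicy` hook is what gives you automatic retries of bulk requests that were rejected with `EsRejectedExecutionException`, and the listener is where you record anything that still fails after the retries are exhausted.

```java
import org.elasticsearch.action.bulk.BackoffPolicy;
import org.elasticsearch.action.bulk.BulkProcessor;
import org.elasticsearch.action.bulk.BulkRequest;
import org.elasticsearch.action.bulk.BulkResponse;
import org.elasticsearch.client.RequestOptions;
import org.elasticsearch.client.RestHighLevelClient;
import org.elasticsearch.common.unit.TimeValue;

public class RetryingBulkIngest {
    static BulkProcessor build(RestHighLevelClient client) {
        return BulkProcessor.builder(
                (request, bulkListener) ->
                        client.bulkAsync(request, RequestOptions.DEFAULT, bulkListener),
                new BulkProcessor.Listener() {
                    @Override
                    public void beforeBulk(long id, BulkRequest request) { }

                    @Override
                    public void afterBulk(long id, BulkRequest request, BulkResponse response) {
                        // partial failures that survived the backoff retries
                        if (response.hasFailures()) {
                            System.err.println("bulk " + id + " had failures: "
                                    + response.buildFailureMessage());
                        }
                    }

                    @Override
                    public void afterBulk(long id, BulkRequest request, Throwable failure) {
                        // the whole request failed (e.g. connection error)
                        System.err.println("bulk " + id + " failed: " + failure);
                    }
                })
            // retry rejected bulks up to 3 times, starting at 100ms and
            // doubling the delay each attempt
            .setBackoffPolicy(
                    BackoffPolicy.exponentialBackoff(TimeValue.timeValueMillis(100), 3))
            .build();
    }
}
```

Even with the backoff policy, documents that still fail after the final retry only show up in `afterBulk`, which is why client-side logging remains necessary.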
That's unlikely to make much difference; there's a risk of rejections no matter how many write threads you have. Client-side failure handling is the most robust approach.
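For completeness, the knob that is usually adjusted is the queue depth rather than the thread count, since the write pool size is tied to the number of available processors. A sketch of the setting in `elasticsearch.yml` (the value here is illustrative, not a recommendation):

```yaml
# elasticsearch.yml
# Deeper queue absorbs larger ingest bursts before rejecting, at the cost
# of more heap held by queued requests and longer tail latencies.
thread_pool.write.queue_size: 2000
```

Note this only delays rejections under sustained overload; it does not remove the need to handle them on the client.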