Elasticsearch node stats write threadpool

When I checked the node stats for some of my nodes, I see the following for the write thread pool:

 "write" : {
          "threads" : 8,
          "queue" : 13,
          "active" : 8,
          "rejected" : 19958,
          "largest" : 8,
          "completed" : 11249210

Does this indicate that Elasticsearch is unable to cope with the rate of ingestion and is rejecting some writes? Or does the bulk API of the high-level REST client take care of retrying the rejected documents? Can we increase the size of the write thread pool?

It's impossible to say from these stats. Sometimes when a task is rejected from the write threadpool the corresponding write operation still succeeds. You will need to check the responses sent back to the client.
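As a sketch of what checking the responses looks like: each item in a bulk response carries its own status, and a task rejected from the write threadpool surfaces as a per-item 429 with an `es_rejected_execution_exception`. The response below is hypothetical, shaped like the JSON body the Bulk API returns:

```python
# Hypothetical bulk response body; a threadpool rejection shows up
# as a per-item 429 rather than a failure of the whole request.
bulk_response = {
    "errors": True,
    "items": [
        {"index": {"_id": "1", "status": 201}},
        {"index": {"_id": "2", "status": 429,
                   "error": {"type": "es_rejected_execution_exception",
                             "reason": "rejected execution of ..."}}},
    ],
}

def rejected_ids(response):
    """Return the _ids of bulk items rejected with a 429 status."""
    rejected = []
    for item in response.get("items", []):
        # Each item is keyed by its operation type (index, create, update, delete).
        op = next(iter(item.values()))
        if op.get("status") == 429:
            rejected.append(op.get("_id"))
    return rejected

print(rejected_ids(bulk_response))  # -> ['2']
```

A fast check of the top-level `errors` flag tells you whether scanning the items is needed at all.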

I don't think there's any built-in retry logic if you're using the Bulk API directly, although there is if you use the BulkProcessor. Either way it's best to record failures on the client (e.g. with logging) and handle them appropriately for your application.
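A minimal client-side retry sketch might look like the following. The `send_bulk` placeholder and all names here are mine, not part of the client library; the idea is just to resend only the rejected items with exponential backoff and surface anything that still fails:

```python
import time

def send_bulk(docs):
    """Placeholder for the real bulk call.

    In a real application this would issue the Bulk API request and return
    the documents whose items came back with a 429 status. Here we simulate
    a call where everything is accepted.
    """
    return []

def bulk_with_retry(docs, max_retries=3, base_delay=0.1):
    """Retry only the rejected documents, backing off between attempts."""
    pending = list(docs)
    for attempt in range(max_retries + 1):
        rejected = send_bulk(pending)
        if not rejected:
            return []  # every document was accepted
        if attempt < max_retries:
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff
        pending = rejected
    # Whatever is left failed all retries: log it and handle it in the app.
    return pending

print(bulk_with_retry(["doc-1", "doc-2"]))  # -> []
```

Whatever `bulk_with_retry` returns after the final attempt is the set you log and handle on the client side.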

Increasing the size of the write thread pool is unlikely to make much difference; there's a risk of rejections no matter how many write threads you have. Client-side failure handling is the most robust approach.
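For completeness, if you do want to experiment, the pool is configured in elasticsearch.yml. Note the write pool is a fixed pool whose size typically defaults to the number of allocated processors, and the defaults (especially the queue size) vary by version; the values below are purely illustrative:

```yaml
# elasticsearch.yml - illustrative values, defaults vary by version
thread_pool:
  write:
    size: 8            # fixed pool, typically tied to allocated processors
    queue_size: 10000  # a larger queue only defers rejections, at a memory cost
```

A larger queue trades rejections for memory pressure and latency, so it doesn't remove the need to handle failures on the client.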

