What does "Could not write all entries for bulk operation" mean after adding two more data nodes to the cluster?

I added two more data nodes to the cluster, and now I am getting the following exception, but only for these two new nodes, not for the other existing nodes.

org.elasticsearch.hadoop.EsHadoopException: Could not write all entries for bulk operation [12/987]. Error sample (first [5] error messages):
org.elasticsearch.hadoop.rest.EsHadoopRemoteException: es_rejected_execution_exception: rejected execution of processing of [333641][indices:data/write/bulk[s][p]]: request: BulkShardRequest [[service-log-2019-10-04][1]] containing [6] requests, target allocation id: aRkGNw45R8CXopW-wRHc4A, primary term: 1 on EsThreadPoolExecutor[name = elasticsearch-data-4/write, queue capacity = 200, org.elasticsearch.common.util.concurrent.EsThreadPoolExecutor@26bcf15f[Running, pool size = 20, active threads = 20, queued tasks = 299, completed tasks = 249962]]

I am using Spark to dump data into the ES cluster.
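For context, the rejection in the log means the write thread pool queue on elasticsearch-data-4 (queue capacity 200, already 299 queued tasks) is full, so the node is pushing back on incoming bulk requests. If the data is being written with the elasticsearch-spark connector, the usual mitigation is to shrink the bulk batches and let the connector retry on rejection. Below is a minimal sketch, not the exact job from this thread: the host, source path, and index name are placeholders, and the setting values are just starting points to tune.

```scala
import org.apache.spark.sql.SparkSession
import org.elasticsearch.spark.sql._ // adds saveToEs to DataFrames

object WriteServiceLogsToEs {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("write-service-logs")
      // connection details are placeholders, point them at your cluster
      .config("es.nodes", "es-coordinator-1")
      .config("es.port", "9200")
      // smaller bulk batches put less pressure on each node's write queue
      .config("es.batch.size.entries", "500")
      .config("es.batch.size.bytes", "1mb")
      // retry with a pause when a node answers with es_rejected_execution_exception
      .config("es.batch.write.retry.count", "10")
      .config("es.batch.write.retry.wait", "30s")
      .getOrCreate()

    // the source path is a placeholder for wherever the service logs live
    val logs = spark.read.json("/data/service-logs/2019-10-04")

    // index name taken from the error message above
    logs.saveToEs("service-log-2019-10-04")
  }
}
```

Reducing the number of Spark tasks that write concurrently (for example with coalesce on the output DataFrame) also lowers the number of simultaneous bulk requests each data node has to queue.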

Thanks

It looks like the cluster is copying shards from the existing nodes over to the newly added nodes, and at the same time new writes are also being scheduled onto those new nodes (see the sketch below). Won't this overload the new data nodes?
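If the shard relocation traffic is what is saturating the new nodes while they also take fresh writes, one option is to temporarily throttle rebalancing until the backfill finishes. A sketch of how that could be done from code, assuming the Elasticsearch low-level REST client is on the classpath; the host name and the setting values are placeholders, and the same request can just as easily be sent with curl or Kibana Dev Tools:

```scala
import org.apache.http.HttpHost
import org.elasticsearch.client.{Request, RestClient}

object ThrottleRebalance {
  def main(args: Array[String]): Unit = {
    // host is a placeholder for any node in the cluster
    val client = RestClient.builder(new HttpHost("es-coordinator-1", 9200, "http")).build()

    // transient settings: move fewer shards at a time and cap recovery bandwidth
    val req = new Request("PUT", "/_cluster/settings")
    req.setJsonEntity(
      """{
        |  "transient": {
        |    "cluster.routing.allocation.cluster_concurrent_rebalance": 1,
        |    "indices.recovery.max_bytes_per_sec": "20mb"
        |  }
        |}""".stripMargin)

    val resp = client.performRequest(req)
    println(resp.getStatusLine)

    client.close()
  }
}
```

The transient settings revert on a full cluster restart, so they can be used just for the duration of the rebalance.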
